Feb 24 02:01:54.466318 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 24 02:01:55.137618 master-0 kubenswrapper[4207]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 02:01:55.137618 master-0 kubenswrapper[4207]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 24 02:01:55.137618 master-0 kubenswrapper[4207]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 02:01:55.137618 master-0 kubenswrapper[4207]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 02:01:55.137618 master-0 kubenswrapper[4207]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 24 02:01:55.137618 master-0 kubenswrapper[4207]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 02:01:55.139259 master-0 kubenswrapper[4207]: I0224 02:01:55.138647    4207 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 24 02:01:55.148533 master-0 kubenswrapper[4207]: W0224 02:01:55.148464    4207 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 02:01:55.148533 master-0 kubenswrapper[4207]: W0224 02:01:55.148516    4207 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 02:01:55.148533 master-0 kubenswrapper[4207]: W0224 02:01:55.148527    4207 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 02:01:55.148533 master-0 kubenswrapper[4207]: W0224 02:01:55.148537    4207 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 02:01:55.148533 master-0 kubenswrapper[4207]: W0224 02:01:55.148546    4207 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148555    4207 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148563    4207 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148600    4207 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148609    4207 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148617    4207 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148625    4207 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148633    4207 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148640    4207 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148649    4207 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148657    4207 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148665    4207 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148672    4207 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148680    4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148688    4207 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148698    4207 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148706    4207 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148714    4207 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148724    4207 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148733    4207 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 02:01:55.148846 master-0 kubenswrapper[4207]: W0224 02:01:55.148741    4207 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148749    4207 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148771    4207 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148779    4207 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148790    4207 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148805    4207 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148815    4207 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148824    4207 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148834    4207 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148843    4207 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148852    4207 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148863    4207 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148873    4207 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148883    4207 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148893    4207 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148902    4207 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148912    4207 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148921    4207 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148931    4207 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148940    4207 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 02:01:55.149919 master-0 kubenswrapper[4207]: W0224 02:01:55.148949    4207 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.148961    4207 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.148970    4207 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.148979    4207 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.148988    4207 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.148997    4207 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149007    4207 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149016    4207 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149025    4207 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149033    4207 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149041    4207 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149049    4207 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149058    4207 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149066    4207 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149076    4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149084    4207 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149093    4207 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149101    4207 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149111    4207 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 02:01:55.150878 master-0 kubenswrapper[4207]: W0224 02:01:55.149122    4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: W0224 02:01:55.149131    4207 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: W0224 02:01:55.149141    4207 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: W0224 02:01:55.149150    4207 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: W0224 02:01:55.149157    4207 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: W0224 02:01:55.149165    4207 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: W0224 02:01:55.149173    4207 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: W0224 02:01:55.149181    4207 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: W0224 02:01:55.149189    4207 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150276    4207 flags.go:64] FLAG: --address="0.0.0.0"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150300    4207 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150315    4207 flags.go:64] FLAG: --anonymous-auth="true"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150328    4207 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150340    4207 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150350    4207 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150363    4207 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150374    4207 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150384    4207 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150394    4207 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150407    4207 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150418    4207 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150427    4207 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 24 02:01:55.151748 master-0 kubenswrapper[4207]: I0224 02:01:55.150436    4207 flags.go:64] FLAG: --cgroup-root=""
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150446    4207 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150456    4207 flags.go:64] FLAG: --client-ca-file=""
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150465    4207 flags.go:64] FLAG: --cloud-config=""
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150474    4207 flags.go:64] FLAG: --cloud-provider=""
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150484    4207 flags.go:64] FLAG: --cluster-dns="[]"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150496    4207 flags.go:64] FLAG: --cluster-domain=""
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150506    4207 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150515    4207 flags.go:64] FLAG: --config-dir=""
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150524    4207 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150534    4207 flags.go:64] FLAG: --container-log-max-files="5"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150546    4207 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150555    4207 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150565    4207 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150599    4207 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150608    4207 flags.go:64] FLAG: --contention-profiling="false"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150617    4207 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150626    4207 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150636    4207 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150645    4207 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150656    4207 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150666    4207 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150675    4207 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150684    4207 flags.go:64] FLAG: --enable-load-reader="false"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150694    4207 flags.go:64] FLAG: --enable-server="true"
Feb 24 02:01:55.152746 master-0 kubenswrapper[4207]: I0224 02:01:55.150703    4207 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150739    4207 flags.go:64] FLAG: --event-burst="100"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150750    4207 flags.go:64] FLAG: --event-qps="50"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150759    4207 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150769    4207 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150778    4207 flags.go:64] FLAG: --eviction-hard=""
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150790    4207 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150799    4207 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150811    4207 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150822    4207 flags.go:64] FLAG: --eviction-soft=""
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150831    4207 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150840    4207 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150849    4207 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150858    4207 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150867    4207 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150876    4207 flags.go:64] FLAG: --fail-swap-on="true"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150886    4207 flags.go:64] FLAG: --feature-gates=""
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150898    4207 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150907    4207 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150917    4207 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150926    4207 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150936    4207 flags.go:64] FLAG: --healthz-port="10248"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150946    4207 flags.go:64] FLAG: --help="false"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150955    4207 flags.go:64] FLAG: --hostname-override=""
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150964    4207 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150974    4207 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 24 02:01:55.153896 master-0 kubenswrapper[4207]: I0224 02:01:55.150983    4207 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.150992    4207 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151002    4207 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151011    4207 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151021    4207 flags.go:64] FLAG: --image-service-endpoint=""
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151030    4207 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151042    4207 flags.go:64] FLAG: --kube-api-burst="100"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151051    4207 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151061    4207 flags.go:64] FLAG: --kube-api-qps="50"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151070    4207 flags.go:64] FLAG: --kube-reserved=""
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151079    4207 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151089    4207 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151099    4207 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151107    4207 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151119    4207 flags.go:64] FLAG: --lock-file=""
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151128    4207 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151138    4207 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151148    4207 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151175    4207 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151185    4207 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151195    4207 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151205    4207 flags.go:64] FLAG: --logging-format="text"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151214    4207 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151224    4207 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151234    4207 flags.go:64] FLAG: --manifest-url=""
Feb 24 02:01:55.155081 master-0 kubenswrapper[4207]: I0224 02:01:55.151243    4207 flags.go:64] FLAG: --manifest-url-header=""
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151255    4207 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151265    4207 flags.go:64] FLAG: --max-open-files="1000000"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151277    4207 flags.go:64] FLAG: --max-pods="110"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151286    4207 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151296    4207 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151305    4207 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151314    4207 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151324    4207 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151334    4207 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151343    4207 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151365    4207 flags.go:64] FLAG: --node-status-max-images="50"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151375    4207 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151385    4207 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151395    4207 flags.go:64] FLAG: --pod-cidr=""
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151404    4207 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151417    4207 flags.go:64] FLAG: --pod-manifest-path=""
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151426    4207 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151436    4207 flags.go:64] FLAG: --pods-per-core="0"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151446    4207 flags.go:64] FLAG: --port="10250"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151456    4207 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151465    4207 flags.go:64] FLAG: --provider-id=""
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151475    4207 flags.go:64] FLAG: --qos-reserved=""
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151484    4207 flags.go:64] FLAG: --read-only-port="10255"
Feb 24 02:01:55.156360 master-0 kubenswrapper[4207]: I0224 02:01:55.151494    4207 flags.go:64] FLAG: --register-node="true"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151503    4207 flags.go:64] FLAG: --register-schedulable="true"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151513    4207 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151529    4207 flags.go:64] FLAG: --registry-burst="10"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151539    4207 flags.go:64] FLAG: --registry-qps="5"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151549    4207 flags.go:64] FLAG: --reserved-cpus=""
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151559    4207 flags.go:64] FLAG: --reserved-memory=""
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151595    4207 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151606    4207 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151615    4207 flags.go:64] FLAG: --rotate-certificates="false"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151625    4207 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151635    4207 flags.go:64] FLAG: --runonce="false"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151644    4207 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151654    4207 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151663    4207 flags.go:64] FLAG: --seccomp-default="false"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151672    4207 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151681    4207 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151691    4207 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151701    4207 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151710    4207 flags.go:64] FLAG: --storage-driver-password="root"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151719    4207 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151729    4207 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151739    4207 flags.go:64] FLAG: --storage-driver-user="root"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151749    4207 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151759    4207 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 24 02:01:55.157436 master-0 kubenswrapper[4207]: I0224 02:01:55.151769    4207 flags.go:64] FLAG: --system-cgroups=""
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151779    4207 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151797    4207 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151808    4207 flags.go:64] FLAG: --tls-cert-file=""
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151819    4207 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151835    4207 flags.go:64] FLAG: --tls-min-version=""
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151845    4207 flags.go:64] FLAG: --tls-private-key-file=""
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151856    4207 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151867    4207 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151888    4207 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151899    4207 flags.go:64] FLAG: --v="2"
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151914    4207 flags.go:64] FLAG: --version="false"
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151928    4207 flags.go:64] FLAG: --vmodule=""
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151942    4207 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: I0224 02:01:55.151953    4207 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: W0224 02:01:55.152215    4207 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: W0224 02:01:55.152227    4207 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: W0224 02:01:55.152238    4207 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: W0224 02:01:55.152248    4207 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: W0224 02:01:55.152258    4207 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: W0224 02:01:55.152267    4207 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: W0224 02:01:55.152277    4207 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: W0224 02:01:55.152286    4207 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 02:01:55.158706 master-0 kubenswrapper[4207]: W0224 02:01:55.152295    4207 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152304    4207 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152313    4207 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152322    4207 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152331    4207 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152343    4207 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152353 4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152362 4207 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152371 4207 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152381 4207 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152390 4207 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152399 4207 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152408 4207 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152418 4207 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152426 4207 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152436 4207 feature_gate.go:330] unrecognized feature gate: Example Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152445 4207 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152455 4207 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152468 4207 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 
02:01:55.152477 4207 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 02:01:55.159769 master-0 kubenswrapper[4207]: W0224 02:01:55.152486 4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152495 4207 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152503 4207 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152512 4207 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152520 4207 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152529 4207 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152537 4207 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152546 4207 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152554 4207 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152564 4207 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152595 4207 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152604 4207 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152613 4207 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152621 4207 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152629 4207 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152637 4207 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152648 4207 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152659 4207 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152667 4207 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152676 4207 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 02:01:55.160774 master-0 kubenswrapper[4207]: W0224 02:01:55.152685 4207 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152695 4207 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152704 4207 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152713 4207 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152721 4207 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152731 4207 feature_gate.go:330] 
unrecognized feature gate: OnClusterBuild Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152739 4207 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152748 4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152757 4207 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152766 4207 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152778 4207 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152787 4207 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152796 4207 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152805 4207 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152814 4207 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152823 4207 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152835 4207 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152847 4207 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152858 4207 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 02:01:55.161934 master-0 kubenswrapper[4207]: W0224 02:01:55.152867 4207 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 02:01:55.163080 master-0 kubenswrapper[4207]: W0224 02:01:55.152877 4207 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 02:01:55.163080 master-0 kubenswrapper[4207]: W0224 02:01:55.152888 4207 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 02:01:55.163080 master-0 kubenswrapper[4207]: W0224 02:01:55.152899 4207 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 02:01:55.163080 master-0 kubenswrapper[4207]: W0224 02:01:55.152909 4207 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 02:01:55.163080 master-0 kubenswrapper[4207]: I0224 02:01:55.152937 4207 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 02:01:55.165762 master-0 kubenswrapper[4207]: I0224 02:01:55.165697 4207 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 24 02:01:55.165762 master-0 kubenswrapper[4207]: I0224 02:01:55.165749 4207 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" 
GOTRACEBACK="" Feb 24 02:01:55.165957 master-0 kubenswrapper[4207]: W0224 02:01:55.165927 4207 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 02:01:55.165957 master-0 kubenswrapper[4207]: W0224 02:01:55.165948 4207 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.165960 4207 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.165971 4207 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.165986 4207 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.166002 4207 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.166014 4207 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.166024 4207 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.166034 4207 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.166045 4207 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.166055 4207 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.166066 4207 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.166076 4207 feature_gate.go:330] unrecognized feature gate: 
ChunkSizeMiB Feb 24 02:01:55.166098 master-0 kubenswrapper[4207]: W0224 02:01:55.166115 4207 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166125 4207 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166136 4207 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166146 4207 feature_gate.go:330] unrecognized feature gate: Example Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166156 4207 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166166 4207 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166177 4207 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166187 4207 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166198 4207 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166207 4207 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166219 4207 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166229 4207 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166239 4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 
02:01:55.166250 4207 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166260 4207 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166274 4207 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166286 4207 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166296 4207 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166306 4207 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166316 4207 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 02:01:55.167088 master-0 kubenswrapper[4207]: W0224 02:01:55.166327 4207 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166340 4207 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166351 4207 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166361 4207 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166372 4207 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166382 4207 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166392 
4207 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166405 4207 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166419 4207 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166430 4207 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166442 4207 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166453 4207 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166465 4207 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166476 4207 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166488 4207 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166498 4207 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166512 4207 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166526 4207 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166537 4207 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 02:01:55.168613 master-0 kubenswrapper[4207]: W0224 02:01:55.166548 4207 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166559 4207 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166569 4207 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166616 4207 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166626 4207 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166636 4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166646 4207 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166657 4207 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166667 4207 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166677 4207 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166687 4207 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 02:01:55.170159 master-0 
kubenswrapper[4207]: W0224 02:01:55.166698 4207 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166708 4207 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166718 4207 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166749 4207 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166761 4207 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166771 4207 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166782 4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166795 4207 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 02:01:55.170159 master-0 kubenswrapper[4207]: W0224 02:01:55.166806 4207 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: I0224 02:01:55.166823 4207 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 02:01:55.171437 master-0 
kubenswrapper[4207]: W0224 02:01:55.167125 4207 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167144 4207 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167156 4207 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167167 4207 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167179 4207 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167191 4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167201 4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167213 4207 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167223 4207 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167233 4207 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167243 4207 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167253 4207 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 02:01:55.171437 master-0 kubenswrapper[4207]: W0224 02:01:55.167264 4207 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 02:01:55.171437 master-0 
kubenswrapper[4207]: W0224 02:01:55.167274 4207 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167284 4207 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167294 4207 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167303 4207 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167314 4207 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167324 4207 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167337 4207 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167350 4207 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167362 4207 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167374 4207 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167385 4207 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167395 4207 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167405 4207 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167418 4207 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167433 4207 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167445 4207 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167456 4207 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167466 4207 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167476 4207 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 02:01:55.172624 master-0 kubenswrapper[4207]: W0224 02:01:55.167486 4207 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167501 4207 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167511 4207 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167525 4207 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167537 4207 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167550 4207 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167561 4207 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167606 4207 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167620 4207 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167632 4207 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167645 4207 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167657 4207 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167668 4207 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167679 4207 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167689 4207 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167699 4207 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167709 4207 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 02:01:55.173829 master-0 
kubenswrapper[4207]: W0224 02:01:55.167720 4207 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167731 4207 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167741 4207 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 02:01:55.173829 master-0 kubenswrapper[4207]: W0224 02:01:55.167752 4207 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167761 4207 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167772 4207 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167782 4207 feature_gate.go:330] unrecognized feature gate: Example Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167792 4207 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167807 4207 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167820 4207 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167833 4207 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167845 4207 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167858 4207 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167870 4207 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167881 4207 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167893 4207 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167905 4207 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167917 4207 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167929 4207 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167941 4207 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167957 4207 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167970 4207 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 02:01:55.174849 master-0 kubenswrapper[4207]: W0224 02:01:55.167982 4207 
feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 02:01:55.175869 master-0 kubenswrapper[4207]: I0224 02:01:55.168000 4207 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 02:01:55.175869 master-0 kubenswrapper[4207]: I0224 02:01:55.168367 4207 server.go:940] "Client rotation is on, will bootstrap in background" Feb 24 02:01:55.175869 master-0 kubenswrapper[4207]: I0224 02:01:55.173015 4207 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 24 02:01:55.175869 master-0 kubenswrapper[4207]: I0224 02:01:55.175451 4207 server.go:997] "Starting client certificate rotation" Feb 24 02:01:55.175869 master-0 kubenswrapper[4207]: I0224 02:01:55.175494 4207 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 24 02:01:55.175869 master-0 kubenswrapper[4207]: I0224 02:01:55.175753 4207 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 02:01:55.207885 master-0 kubenswrapper[4207]: I0224 02:01:55.207795 4207 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 02:01:55.212105 master-0 kubenswrapper[4207]: E0224 02:01:55.212035 4207 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control 
plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:01:55.215024 master-0 kubenswrapper[4207]: I0224 02:01:55.214958 4207 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 02:01:55.242443 master-0 kubenswrapper[4207]: I0224 02:01:55.242364 4207 log.go:25] "Validated CRI v1 runtime API" Feb 24 02:01:55.249179 master-0 kubenswrapper[4207]: I0224 02:01:55.249132 4207 log.go:25] "Validated CRI v1 image API" Feb 24 02:01:55.251922 master-0 kubenswrapper[4207]: I0224 02:01:55.251876 4207 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 24 02:01:55.261915 master-0 kubenswrapper[4207]: I0224 02:01:55.261798 4207 fs.go:135] Filesystem UUIDs: map[19c17b43-4715-4d15-ba6d-72e795fc4d8f:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 24 02:01:55.262017 master-0 kubenswrapper[4207]: I0224 02:01:55.261902 4207 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Feb 24 02:01:55.297697 master-0 kubenswrapper[4207]: I0224 02:01:55.297333 4207 manager.go:217] Machine: {Timestamp:2026-02-24 02:01:55.294924412 +0000 UTC m=+0.638228672 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] 
MachineID:1d448a69ed5349cda3229fbde6198537 SystemUUID:1d448a69-ed53-49cd-a322-9fbde6198537 BootID:db0156e3-cefa-4894-85d6-ad7931f79daa Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:b3:1a:4a Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:91:46:2f Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:72:e3:6b:de:66:46 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 
Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 
Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 24 02:01:55.297697 master-0 kubenswrapper[4207]: I0224 02:01:55.297653 4207 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 24 02:01:55.297945 master-0 kubenswrapper[4207]: I0224 02:01:55.297812 4207 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 24 02:01:55.299394 master-0 kubenswrapper[4207]: I0224 02:01:55.299358 4207 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 24 02:01:55.299645 master-0 kubenswrapper[4207]: I0224 02:01:55.299592 4207 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 24 02:01:55.299932 master-0 kubenswrapper[4207]: I0224 02:01:55.299634 4207 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 24 02:01:55.300035 master-0 kubenswrapper[4207]: I0224 02:01:55.299952 4207 topology_manager.go:138] "Creating topology manager with none policy" Feb 24 02:01:55.300035 master-0 kubenswrapper[4207]: I0224 02:01:55.299967 4207 container_manager_linux.go:303] "Creating device plugin manager" Feb 24 02:01:55.300666 master-0 kubenswrapper[4207]: I0224 02:01:55.300632 4207 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 02:01:55.300754 master-0 kubenswrapper[4207]: I0224 02:01:55.300669 4207 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 02:01:55.300841 master-0 kubenswrapper[4207]: I0224 02:01:55.300810 4207 state_mem.go:36] "Initialized new in-memory state store" Feb 24 02:01:55.301016 master-0 kubenswrapper[4207]: I0224 02:01:55.300984 4207 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 24 02:01:55.306760 master-0 kubenswrapper[4207]: I0224 02:01:55.306731 4207 kubelet.go:418] "Attempting to sync node with API server" Feb 24 02:01:55.306760 master-0 kubenswrapper[4207]: I0224 02:01:55.306753 4207 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 24 02:01:55.306919 master-0 kubenswrapper[4207]: I0224 02:01:55.306780 4207 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 24 02:01:55.306919 master-0 kubenswrapper[4207]: I0224 02:01:55.306795 4207 kubelet.go:324] "Adding apiserver pod source" Feb 24 02:01:55.306919 master-0 
kubenswrapper[4207]: I0224 02:01:55.306815 4207 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 24 02:01:55.311275 master-0 kubenswrapper[4207]: I0224 02:01:55.311233 4207 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 24 02:01:55.315285 master-0 kubenswrapper[4207]: W0224 02:01:55.315181 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:55.315415 master-0 kubenswrapper[4207]: I0224 02:01:55.315303 4207 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 24 02:01:55.315415 master-0 kubenswrapper[4207]: E0224 02:01:55.315313 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:01:55.315415 master-0 kubenswrapper[4207]: W0224 02:01:55.315174 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:55.315415 master-0 kubenswrapper[4207]: E0224 02:01:55.315381 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315487 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315509 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315519 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315528 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315538 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315547 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315556 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315564 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315594 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315604 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 24 02:01:55.315690 master-0 kubenswrapper[4207]: I0224 02:01:55.315617 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 24 02:01:55.316230 master-0 kubenswrapper[4207]: I0224 02:01:55.316114 4207 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 24 02:01:55.317987 master-0 kubenswrapper[4207]: I0224 02:01:55.317956 4207 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 24 02:01:55.318440 master-0 kubenswrapper[4207]: I0224 02:01:55.318411 4207 server.go:1280] "Started kubelet" Feb 24 02:01:55.319970 master-0 kubenswrapper[4207]: I0224 02:01:55.319859 4207 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 24 02:01:55.320016 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 24 02:01:55.325916 master-0 kubenswrapper[4207]: I0224 02:01:55.325549 4207 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 24 02:01:55.326061 master-0 kubenswrapper[4207]: I0224 02:01:55.325940 4207 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 24 02:01:55.326293 master-0 kubenswrapper[4207]: I0224 02:01:55.326153 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:55.326619 master-0 kubenswrapper[4207]: I0224 02:01:55.326530 4207 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 24 02:01:55.332200 master-0 kubenswrapper[4207]: E0224 02:01:55.330803 4207 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.18970c4fcf981f1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.318382364 +0000 UTC m=+0.661686614,LastTimestamp:2026-02-24 02:01:55.318382364 +0000 UTC m=+0.661686614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:01:55.332200 master-0 kubenswrapper[4207]: I0224 02:01:55.332191 4207 server.go:449] "Adding debug handlers to kubelet server" Feb 24 02:01:55.333268 master-0 kubenswrapper[4207]: I0224 02:01:55.333228 4207 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 24 02:01:55.333430 master-0 kubenswrapper[4207]: I0224 02:01:55.333391 4207 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 24 02:01:55.334104 master-0 kubenswrapper[4207]: E0224 02:01:55.334009 4207 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 24 02:01:55.334219 master-0 kubenswrapper[4207]: I0224 02:01:55.334151 4207 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 24 02:01:55.334219 master-0 kubenswrapper[4207]: I0224 02:01:55.334179 4207 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 24 02:01:55.334403 master-0 kubenswrapper[4207]: I0224 02:01:55.334339 4207 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 24 02:01:55.334521 master-0 kubenswrapper[4207]: I0224 02:01:55.334474 4207 reconstruct.go:97] "Volume reconstruction finished" Feb 24 02:01:55.334521 master-0 kubenswrapper[4207]: I0224 02:01:55.334516 4207 reconciler.go:26] "Reconciler: start to sync state" Feb 24 02:01:55.336145 master-0 kubenswrapper[4207]: E0224 02:01:55.336058 4207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 
192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 24 02:01:55.336145 master-0 kubenswrapper[4207]: W0224 02:01:55.335995 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:55.336340 master-0 kubenswrapper[4207]: E0224 02:01:55.336164 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:01:55.336902 master-0 kubenswrapper[4207]: I0224 02:01:55.336831 4207 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 24 02:01:55.336902 master-0 kubenswrapper[4207]: I0224 02:01:55.336876 4207 factory.go:55] Registering systemd factory Feb 24 02:01:55.336902 master-0 kubenswrapper[4207]: I0224 02:01:55.336892 4207 factory.go:221] Registration of the systemd container factory successfully Feb 24 02:01:55.337556 master-0 kubenswrapper[4207]: I0224 02:01:55.337500 4207 factory.go:153] Registering CRI-O factory Feb 24 02:01:55.337556 master-0 kubenswrapper[4207]: I0224 02:01:55.337550 4207 factory.go:221] Registration of the crio container factory successfully Feb 24 02:01:55.337718 master-0 kubenswrapper[4207]: I0224 02:01:55.337622 4207 factory.go:103] Registering Raw factory Feb 24 02:01:55.337718 master-0 kubenswrapper[4207]: I0224 02:01:55.337660 4207 manager.go:1196] Started watching for new ooms in manager Feb 24 02:01:55.339154 master-0 
kubenswrapper[4207]: I0224 02:01:55.339098 4207 manager.go:319] Starting recovery of all containers Feb 24 02:01:55.340229 master-0 kubenswrapper[4207]: E0224 02:01:55.340063 4207 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 24 02:01:55.371855 master-0 kubenswrapper[4207]: I0224 02:01:55.371796 4207 manager.go:324] Recovery completed Feb 24 02:01:55.391631 master-0 kubenswrapper[4207]: I0224 02:01:55.391440 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:01:55.394466 master-0 kubenswrapper[4207]: I0224 02:01:55.392897 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:01:55.394466 master-0 kubenswrapper[4207]: I0224 02:01:55.392939 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:01:55.394466 master-0 kubenswrapper[4207]: I0224 02:01:55.392952 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:01:55.395394 master-0 kubenswrapper[4207]: I0224 02:01:55.395354 4207 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 02:01:55.395394 master-0 kubenswrapper[4207]: I0224 02:01:55.395378 4207 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 24 02:01:55.395569 master-0 kubenswrapper[4207]: I0224 02:01:55.395406 4207 state_mem.go:36] "Initialized new in-memory state store" Feb 24 02:01:55.401811 master-0 kubenswrapper[4207]: I0224 02:01:55.401758 4207 policy_none.go:49] "None policy: Start" Feb 24 02:01:55.402860 master-0 kubenswrapper[4207]: I0224 02:01:55.402806 4207 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 24 02:01:55.403011 master-0 kubenswrapper[4207]: I0224 02:01:55.402865 4207 state_mem.go:35] "Initializing 
new in-memory state store" Feb 24 02:01:55.435001 master-0 kubenswrapper[4207]: E0224 02:01:55.434927 4207 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 24 02:01:55.474090 master-0 kubenswrapper[4207]: I0224 02:01:55.474045 4207 manager.go:334] "Starting Device Plugin manager" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.474196 4207 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.474219 4207 server.go:79] "Starting device plugin registration server" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.474721 4207 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.474746 4207 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.477587 4207 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.477759 4207 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.477769 4207 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: E0224 02:01:55.479043 4207 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.500010 4207 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.502920 4207 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.503032 4207 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: I0224 02:01:55.503070 4207 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: E0224 02:01:55.503154 4207 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: W0224 02:01:55.507071 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:01:55.507707 master-0 kubenswrapper[4207]: E0224 02:01:55.507195 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 02:01:55.537530 master-0 kubenswrapper[4207]: E0224 02:01:55.537434 4207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 24 02:01:55.575786 master-0 kubenswrapper[4207]: I0224 02:01:55.575677 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.577492 master-0 kubenswrapper[4207]: I0224 02:01:55.577444 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.577604 master-0 kubenswrapper[4207]: I0224 02:01:55.577499 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.577604 master-0 kubenswrapper[4207]: I0224 02:01:55.577520 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.577604 master-0 kubenswrapper[4207]: I0224 02:01:55.577564 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 02:01:55.578660 master-0 kubenswrapper[4207]: E0224 02:01:55.578560 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 24 02:01:55.603861 master-0 kubenswrapper[4207]: I0224 02:01:55.603740 4207 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0"]
Feb 24 02:01:55.603861 master-0 kubenswrapper[4207]: I0224 02:01:55.603864 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.605125 master-0 kubenswrapper[4207]: I0224 02:01:55.605073 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.605315 master-0 kubenswrapper[4207]: I0224 02:01:55.605163 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.605315 master-0 kubenswrapper[4207]: I0224 02:01:55.605184 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.605429 master-0 kubenswrapper[4207]: I0224 02:01:55.605394 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.605833 master-0 kubenswrapper[4207]: I0224 02:01:55.605786 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.605915 master-0 kubenswrapper[4207]: I0224 02:01:55.605854 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.606521 master-0 kubenswrapper[4207]: I0224 02:01:55.606479 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.606637 master-0 kubenswrapper[4207]: I0224 02:01:55.606527 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.606637 master-0 kubenswrapper[4207]: I0224 02:01:55.606546 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.606782 master-0 kubenswrapper[4207]: I0224 02:01:55.606727 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.606887 master-0 kubenswrapper[4207]: I0224 02:01:55.606837 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.606887 master-0 kubenswrapper[4207]: I0224 02:01:55.606880 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.607074 master-0 kubenswrapper[4207]: I0224 02:01:55.606901 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.607074 master-0 kubenswrapper[4207]: I0224 02:01:55.607042 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.607199 master-0 kubenswrapper[4207]: I0224 02:01:55.607110 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.608269 master-0 kubenswrapper[4207]: I0224 02:01:55.608217 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.608348 master-0 kubenswrapper[4207]: I0224 02:01:55.608260 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.608348 master-0 kubenswrapper[4207]: I0224 02:01:55.608307 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.608348 master-0 kubenswrapper[4207]: I0224 02:01:55.608337 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.608515 master-0 kubenswrapper[4207]: I0224 02:01:55.608273 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.608515 master-0 kubenswrapper[4207]: I0224 02:01:55.608436 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.609443 master-0 kubenswrapper[4207]: I0224 02:01:55.609394 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.609613 master-0 kubenswrapper[4207]: I0224 02:01:55.609537 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:01:55.609691 master-0 kubenswrapper[4207]: I0224 02:01:55.609618 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.610591 master-0 kubenswrapper[4207]: I0224 02:01:55.610520 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.610591 master-0 kubenswrapper[4207]: I0224 02:01:55.610561 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.610727 master-0 kubenswrapper[4207]: I0224 02:01:55.610605 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.610782 master-0 kubenswrapper[4207]: I0224 02:01:55.610745 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.610782 master-0 kubenswrapper[4207]: I0224 02:01:55.610773 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.610890 master-0 kubenswrapper[4207]: I0224 02:01:55.610790 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.611012 master-0 kubenswrapper[4207]: I0224 02:01:55.610946 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.611274 master-0 kubenswrapper[4207]: I0224 02:01:55.611226 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:01:55.611364 master-0 kubenswrapper[4207]: I0224 02:01:55.611299 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.612495 master-0 kubenswrapper[4207]: I0224 02:01:55.612450 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.612495 master-0 kubenswrapper[4207]: I0224 02:01:55.612490 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.612653 master-0 kubenswrapper[4207]: I0224 02:01:55.612508 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.613135 master-0 kubenswrapper[4207]: I0224 02:01:55.613083 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.613135 master-0 kubenswrapper[4207]: I0224 02:01:55.613127 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.613257 master-0 kubenswrapper[4207]: I0224 02:01:55.613146 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.613448 master-0 kubenswrapper[4207]: I0224 02:01:55.613404 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:01:55.613516 master-0 kubenswrapper[4207]: I0224 02:01:55.613459 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.614816 master-0 kubenswrapper[4207]: I0224 02:01:55.614769 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.614816 master-0 kubenswrapper[4207]: I0224 02:01:55.614807 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.614941 master-0 kubenswrapper[4207]: I0224 02:01:55.614826 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.735594 master-0 kubenswrapper[4207]: I0224 02:01:55.735509 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.735728 master-0 kubenswrapper[4207]: I0224 02:01:55.735599 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.735837 master-0 kubenswrapper[4207]: I0224 02:01:55.735780 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:01:55.735914 master-0 kubenswrapper[4207]: I0224 02:01:55.735881 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:01:55.736025 master-0 kubenswrapper[4207]: I0224 02:01:55.735970 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:01:55.736146 master-0 kubenswrapper[4207]: I0224 02:01:55.736097 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:01:55.736282 master-0 kubenswrapper[4207]: I0224 02:01:55.736231 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.736354 master-0 kubenswrapper[4207]: I0224 02:01:55.736288 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.736426 master-0 kubenswrapper[4207]: I0224 02:01:55.736397 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.736489 master-0 kubenswrapper[4207]: I0224 02:01:55.736438 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.736551 master-0 kubenswrapper[4207]: I0224 02:01:55.736512 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.736651 master-0 kubenswrapper[4207]: I0224 02:01:55.736549 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:01:55.736651 master-0 kubenswrapper[4207]: I0224 02:01:55.736618 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.736776 master-0 kubenswrapper[4207]: I0224 02:01:55.736654 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.736776 master-0 kubenswrapper[4207]: I0224 02:01:55.736691 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.736776 master-0 kubenswrapper[4207]: I0224 02:01:55.736729 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.736943 master-0 kubenswrapper[4207]: I0224 02:01:55.736785 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:01:55.778904 master-0 kubenswrapper[4207]: I0224 02:01:55.778795 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:55.780357 master-0 kubenswrapper[4207]: I0224 02:01:55.780304 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:55.780447 master-0 kubenswrapper[4207]: I0224 02:01:55.780367 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:55.780447 master-0 kubenswrapper[4207]: I0224 02:01:55.780388 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:55.780560 master-0 kubenswrapper[4207]: I0224 02:01:55.780449 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 02:01:55.781722 master-0 kubenswrapper[4207]: E0224 02:01:55.781660 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 24 02:01:55.837594 master-0 kubenswrapper[4207]: I0224 02:01:55.837458 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.837790 master-0 kubenswrapper[4207]: I0224 02:01:55.837725 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.837790 master-0 kubenswrapper[4207]: I0224 02:01:55.837756 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.837929 master-0 kubenswrapper[4207]: I0224 02:01:55.837817 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.837929 master-0 kubenswrapper[4207]: I0224 02:01:55.837909 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.838205 master-0 kubenswrapper[4207]: I0224 02:01:55.837959 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.838205 master-0 kubenswrapper[4207]: I0224 02:01:55.838024 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.838205 master-0 kubenswrapper[4207]: I0224 02:01:55.838063 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.838205 master-0 kubenswrapper[4207]: I0224 02:01:55.838120 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.838205 master-0 kubenswrapper[4207]: I0224 02:01:55.838128 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.838470 master-0 kubenswrapper[4207]: I0224 02:01:55.838213 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:01:55.838470 master-0 kubenswrapper[4207]: I0224 02:01:55.838255 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.838470 master-0 kubenswrapper[4207]: I0224 02:01:55.838321 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.838470 master-0 kubenswrapper[4207]: I0224 02:01:55.838359 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.838470 master-0 kubenswrapper[4207]: I0224 02:01:55.838395 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.838470 master-0 kubenswrapper[4207]: I0224 02:01:55.838426 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:01:55.838470 master-0 kubenswrapper[4207]: I0224 02:01:55.838429 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.838470 master-0 kubenswrapper[4207]: I0224 02:01:55.838467 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838492 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838487 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838529 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838642 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838692 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838714 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838743 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838783 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838815 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838864 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838888 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838950 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838979 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.838987 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.839022 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:01:55.839063 master-0 kubenswrapper[4207]: I0224 02:01:55.839058 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:01:55.940240 master-0 kubenswrapper[4207]: E0224 02:01:55.940121 4207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 24 02:01:55.973264 master-0 kubenswrapper[4207]: I0224 02:01:55.973151 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:01:55.986606 master-0 kubenswrapper[4207]: I0224 02:01:55.986266 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:01:56.009864 master-0 kubenswrapper[4207]: I0224 02:01:56.009788 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:01:56.026113 master-0 kubenswrapper[4207]: I0224 02:01:56.026050 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:01:56.036683 master-0 kubenswrapper[4207]: I0224 02:01:56.036629 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:01:56.411903 master-0 kubenswrapper[4207]: I0224 02:01:56.183888 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:01:56.411903 master-0 kubenswrapper[4207]: W0224 02:01:56.411791 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:01:56.411903 master-0 kubenswrapper[4207]: I0224 02:01:56.411840 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:01:56.412936 master-0 kubenswrapper[4207]: E0224 02:01:56.411905 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 02:01:56.412936 master-0 kubenswrapper[4207]: W0224 02:01:56.411791 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:01:56.412936 master-0 kubenswrapper[4207]: E0224 02:01:56.411965 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 02:01:56.412936 master-0 kubenswrapper[4207]: I0224 02:01:56.412868 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:01:56.412936 master-0 kubenswrapper[4207]: I0224 02:01:56.412899 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:01:56.412936 master-0 kubenswrapper[4207]: I0224 02:01:56.412908 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:01:56.413091 master-0 kubenswrapper[4207]: I0224 02:01:56.412968 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 02:01:56.413822 master-0 kubenswrapper[4207]: E0224 02:01:56.413766 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 24 02:01:56.741940 master-0 kubenswrapper[4207]: E0224 02:01:56.741759 4207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 24 02:01:56.798862 master-0 kubenswrapper[4207]: W0224 02:01:56.798743 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:01:56.798991 master-0 kubenswrapper[4207]: E0224 02:01:56.798882 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 02:01:56.857196 master-0 kubenswrapper[4207]: W0224 02:01:56.857065 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:01:56.857196 master-0 kubenswrapper[4207]: E0224 02:01:56.857189 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 02:01:57.021664 master-0 kubenswrapper[4207]: W0224 02:01:57.021550 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687e92a6cecf1e2beeef16a0b322ad08.slice/crio-97c357985567dbd0d0a6d267a8cc4448d666a74cc353c0643a50d8ab3f6c2302 WatchSource:0}: Error finding container 97c357985567dbd0d0a6d267a8cc4448d666a74cc353c0643a50d8ab3f6c2302: Status 404 returned error can't find the container with id
97c357985567dbd0d0a6d267a8cc4448d666a74cc353c0643a50d8ab3f6c2302 Feb 24 02:01:57.026997 master-0 kubenswrapper[4207]: I0224 02:01:57.026946 4207 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 02:01:57.033564 master-0 kubenswrapper[4207]: W0224 02:01:57.033504 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc997c8e9d3be51d454d8e61e376bef08.slice/crio-fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793 WatchSource:0}: Error finding container fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793: Status 404 returned error can't find the container with id fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793 Feb 24 02:01:57.035674 master-0 kubenswrapper[4207]: W0224 02:01:57.035618 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ad9373c007a4fcd25e70622bdc8deb.slice/crio-d130af32cfc701e2db477383fd28dc66d411455c0f39e12c729c963e3f569427 WatchSource:0}: Error finding container d130af32cfc701e2db477383fd28dc66d411455c0f39e12c729c963e3f569427: Status 404 returned error can't find the container with id d130af32cfc701e2db477383fd28dc66d411455c0f39e12c729c963e3f569427 Feb 24 02:01:57.050371 master-0 kubenswrapper[4207]: W0224 02:01:57.050288 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c3cb71c9851003c8de7e7c5db4b87e.slice/crio-6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02 WatchSource:0}: Error finding container 6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02: Status 404 returned error can't find the container with id 6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02 Feb 24 02:01:57.063303 master-0 kubenswrapper[4207]: W0224 02:01:57.063254 4207 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12dab5d350ebc129b0bfa4714d330b15.slice/crio-8ce6ca3d3b13c63a9ea107eaabf4c609711cee1bd75660ff7fc88de79d18620c WatchSource:0}: Error finding container 8ce6ca3d3b13c63a9ea107eaabf4c609711cee1bd75660ff7fc88de79d18620c: Status 404 returned error can't find the container with id 8ce6ca3d3b13c63a9ea107eaabf4c609711cee1bd75660ff7fc88de79d18620c Feb 24 02:01:57.214399 master-0 kubenswrapper[4207]: I0224 02:01:57.214321 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:01:57.215504 master-0 kubenswrapper[4207]: I0224 02:01:57.215453 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:01:57.215504 master-0 kubenswrapper[4207]: I0224 02:01:57.215501 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:01:57.215671 master-0 kubenswrapper[4207]: I0224 02:01:57.215524 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:01:57.215671 master-0 kubenswrapper[4207]: I0224 02:01:57.215624 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:01:57.217329 master-0 kubenswrapper[4207]: E0224 02:01:57.217255 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 24 02:01:57.228438 master-0 kubenswrapper[4207]: I0224 02:01:57.228396 4207 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 02:01:57.231268 master-0 kubenswrapper[4207]: E0224 02:01:57.231202 4207 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a 
signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:01:57.327800 master-0 kubenswrapper[4207]: I0224 02:01:57.327686 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:57.509545 master-0 kubenswrapper[4207]: I0224 02:01:57.509364 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"8ce6ca3d3b13c63a9ea107eaabf4c609711cee1bd75660ff7fc88de79d18620c"} Feb 24 02:01:57.512700 master-0 kubenswrapper[4207]: I0224 02:01:57.512632 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02"} Feb 24 02:01:57.514426 master-0 kubenswrapper[4207]: I0224 02:01:57.514358 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"d130af32cfc701e2db477383fd28dc66d411455c0f39e12c729c963e3f569427"} Feb 24 02:01:57.515858 master-0 kubenswrapper[4207]: I0224 02:01:57.515798 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793"} Feb 24 02:01:57.516989 master-0 kubenswrapper[4207]: I0224 
02:01:57.516931 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"97c357985567dbd0d0a6d267a8cc4448d666a74cc353c0643a50d8ab3f6c2302"} Feb 24 02:01:58.328856 master-0 kubenswrapper[4207]: I0224 02:01:58.328697 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:58.343877 master-0 kubenswrapper[4207]: E0224 02:01:58.343766 4207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 24 02:01:58.737746 master-0 kubenswrapper[4207]: W0224 02:01:58.737391 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:58.737746 master-0 kubenswrapper[4207]: E0224 02:01:58.737493 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:01:58.738707 master-0 kubenswrapper[4207]: W0224 02:01:58.738359 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:58.738707 master-0 kubenswrapper[4207]: E0224 02:01:58.738438 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:01:58.818423 master-0 kubenswrapper[4207]: I0224 02:01:58.818322 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:01:58.819828 master-0 kubenswrapper[4207]: I0224 02:01:58.819748 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:01:58.819828 master-0 kubenswrapper[4207]: I0224 02:01:58.819806 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:01:58.819828 master-0 kubenswrapper[4207]: I0224 02:01:58.819824 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:01:58.820040 master-0 kubenswrapper[4207]: I0224 02:01:58.819898 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:01:58.821065 master-0 kubenswrapper[4207]: E0224 02:01:58.820998 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 24 02:01:58.894284 master-0 kubenswrapper[4207]: E0224 02:01:58.894071 4207 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.18970c4fcf981f1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.318382364 +0000 UTC m=+0.661686614,LastTimestamp:2026-02-24 02:01:55.318382364 +0000 UTC m=+0.661686614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:01:59.143239 master-0 kubenswrapper[4207]: W0224 02:01:59.143085 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:59.143239 master-0 kubenswrapper[4207]: E0224 02:01:59.143172 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:01:59.328442 master-0 kubenswrapper[4207]: I0224 02:01:59.328355 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:59.502398 master-0 kubenswrapper[4207]: W0224 02:01:59.502256 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:01:59.502398 master-0 kubenswrapper[4207]: E0224 02:01:59.502339 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:02:00.372746 master-0 kubenswrapper[4207]: I0224 02:02:00.372662 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:02:01.328875 master-0 kubenswrapper[4207]: I0224 02:02:01.328804 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:02:01.530614 master-0 kubenswrapper[4207]: I0224 02:02:01.529657 4207 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="17bbd7fbbeaf0dec034d902b8e7575b6559c59f312f1a534ffc4119208ba1272" exitCode=0 Feb 24 02:02:01.530614 master-0 kubenswrapper[4207]: I0224 02:02:01.529712 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"17bbd7fbbeaf0dec034d902b8e7575b6559c59f312f1a534ffc4119208ba1272"} Feb 24 02:02:01.530614 master-0 kubenswrapper[4207]: I0224 02:02:01.529820 4207 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:01.531499 master-0 kubenswrapper[4207]: I0224 02:02:01.531117 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:01.531499 master-0 kubenswrapper[4207]: I0224 02:02:01.531141 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:01.531499 master-0 kubenswrapper[4207]: I0224 02:02:01.531150 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:01.545682 master-0 kubenswrapper[4207]: E0224 02:02:01.545619 4207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 24 02:02:01.555109 master-0 kubenswrapper[4207]: I0224 02:02:01.550628 4207 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 02:02:01.557060 master-0 kubenswrapper[4207]: E0224 02:02:01.557018 4207 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:02:02.022063 master-0 kubenswrapper[4207]: I0224 02:02:02.021988 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:02.023391 master-0 kubenswrapper[4207]: I0224 02:02:02.023361 4207 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:02.023460 master-0 kubenswrapper[4207]: I0224 02:02:02.023399 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:02.023460 master-0 kubenswrapper[4207]: I0224 02:02:02.023409 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:02.023460 master-0 kubenswrapper[4207]: I0224 02:02:02.023462 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:02:02.024460 master-0 kubenswrapper[4207]: E0224 02:02:02.024426 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 24 02:02:02.143852 master-0 kubenswrapper[4207]: W0224 02:02:02.143733 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:02:02.143930 master-0 kubenswrapper[4207]: E0224 02:02:02.143849 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:02:02.327658 master-0 kubenswrapper[4207]: I0224 02:02:02.327605 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:02:02.533021 
master-0 kubenswrapper[4207]: I0224 02:02:02.532991 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log" Feb 24 02:02:02.533675 master-0 kubenswrapper[4207]: I0224 02:02:02.533592 4207 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="bbf3cc0a2ff472fed2887229f2720f2c8e23530630465193a1527330f5b07f8d" exitCode=1 Feb 24 02:02:02.533734 master-0 kubenswrapper[4207]: I0224 02:02:02.533673 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"bbf3cc0a2ff472fed2887229f2720f2c8e23530630465193a1527330f5b07f8d"} Feb 24 02:02:02.533767 master-0 kubenswrapper[4207]: I0224 02:02:02.533736 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:02.535100 master-0 kubenswrapper[4207]: I0224 02:02:02.534524 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:02.535100 master-0 kubenswrapper[4207]: I0224 02:02:02.534549 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:02.535100 master-0 kubenswrapper[4207]: I0224 02:02:02.534559 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:02.535100 master-0 kubenswrapper[4207]: I0224 02:02:02.534856 4207 scope.go:117] "RemoveContainer" containerID="bbf3cc0a2ff472fed2887229f2720f2c8e23530630465193a1527330f5b07f8d" Feb 24 02:02:02.535401 master-0 kubenswrapper[4207]: I0224 02:02:02.535369 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22"} Feb 24 02:02:02.923373 master-0 kubenswrapper[4207]: W0224 02:02:02.922805 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:02:02.923373 master-0 kubenswrapper[4207]: E0224 02:02:02.923312 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 02:02:03.327870 master-0 kubenswrapper[4207]: I0224 02:02:03.327798 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:02:03.540870 master-0 kubenswrapper[4207]: I0224 02:02:03.540826 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 24 02:02:03.541482 master-0 kubenswrapper[4207]: I0224 02:02:03.541400 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log" Feb 24 02:02:03.541813 master-0 kubenswrapper[4207]: I0224 02:02:03.541774 4207 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" 
containerID="6c175419567b5ca9a91e007584c1efdbed9f795fb8d90c8307dd30f12d0f572e" exitCode=1 Feb 24 02:02:03.541883 master-0 kubenswrapper[4207]: I0224 02:02:03.541849 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"6c175419567b5ca9a91e007584c1efdbed9f795fb8d90c8307dd30f12d0f572e"} Feb 24 02:02:03.541922 master-0 kubenswrapper[4207]: I0224 02:02:03.541899 4207 scope.go:117] "RemoveContainer" containerID="bbf3cc0a2ff472fed2887229f2720f2c8e23530630465193a1527330f5b07f8d" Feb 24 02:02:03.542095 master-0 kubenswrapper[4207]: I0224 02:02:03.542072 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:03.543175 master-0 kubenswrapper[4207]: I0224 02:02:03.543150 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:03.543222 master-0 kubenswrapper[4207]: I0224 02:02:03.543184 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:03.543222 master-0 kubenswrapper[4207]: I0224 02:02:03.543199 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:03.543563 master-0 kubenswrapper[4207]: I0224 02:02:03.543541 4207 scope.go:117] "RemoveContainer" containerID="6c175419567b5ca9a91e007584c1efdbed9f795fb8d90c8307dd30f12d0f572e" Feb 24 02:02:03.543780 master-0 kubenswrapper[4207]: E0224 02:02:03.543751 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 24 02:02:03.545430 master-0 kubenswrapper[4207]: I0224 02:02:03.545171 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803"} Feb 24 02:02:03.545430 master-0 kubenswrapper[4207]: I0224 02:02:03.545237 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:03.545821 master-0 kubenswrapper[4207]: I0224 02:02:03.545794 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:03.545821 master-0 kubenswrapper[4207]: I0224 02:02:03.545821 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:03.545912 master-0 kubenswrapper[4207]: I0224 02:02:03.545834 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:04.328297 master-0 kubenswrapper[4207]: I0224 02:02:04.328235 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:02:04.550224 master-0 kubenswrapper[4207]: I0224 02:02:04.550156 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 24 02:02:04.552162 master-0 kubenswrapper[4207]: I0224 02:02:04.552117 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:04.552327 master-0 kubenswrapper[4207]: 
I0224 02:02:04.552284 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:02:04.553234 master-0 kubenswrapper[4207]: I0224 02:02:04.553203 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:02:04.553289 master-0 kubenswrapper[4207]: I0224 02:02:04.553240 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:02:04.553289 master-0 kubenswrapper[4207]: I0224 02:02:04.553252 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:02:04.553803 master-0 kubenswrapper[4207]: I0224 02:02:04.553764 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:02:04.553864 master-0 kubenswrapper[4207]: I0224 02:02:04.553824 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:02:04.553864 master-0 kubenswrapper[4207]: I0224 02:02:04.553860 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:02:04.554457 master-0 kubenswrapper[4207]: I0224 02:02:04.554425 4207 scope.go:117] "RemoveContainer" containerID="6c175419567b5ca9a91e007584c1efdbed9f795fb8d90c8307dd30f12d0f572e"
Feb 24 02:02:04.554805 master-0 kubenswrapper[4207]: E0224 02:02:04.554767 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 24 02:02:05.098738 master-0 kubenswrapper[4207]: W0224 02:02:05.098548 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:02:05.098738 master-0 kubenswrapper[4207]: E0224 02:02:05.098678 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 02:02:05.328481 master-0 kubenswrapper[4207]: I0224 02:02:05.328328 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:02:05.479281 master-0 kubenswrapper[4207]: E0224 02:02:05.479244 4207 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 24 02:02:05.836309 master-0 kubenswrapper[4207]: W0224 02:02:05.836080 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:02:05.836309 master-0 kubenswrapper[4207]: E0224 02:02:05.836226 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 02:02:06.327891 master-0 kubenswrapper[4207]: I0224 02:02:06.327819 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:02:07.328342 master-0 kubenswrapper[4207]: I0224 02:02:07.328125 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 02:02:07.561007 master-0 kubenswrapper[4207]: I0224 02:02:07.560907 4207 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416" exitCode=0
Feb 24 02:02:07.561161 master-0 kubenswrapper[4207]: I0224 02:02:07.561016 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416"}
Feb 24 02:02:07.561246 master-0 kubenswrapper[4207]: I0224 02:02:07.561178 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:02:07.562384 master-0 kubenswrapper[4207]: I0224 02:02:07.562337 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:02:07.562384 master-0 kubenswrapper[4207]: I0224 02:02:07.562382 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:02:07.562523 master-0 kubenswrapper[4207]: I0224 02:02:07.562401 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:02:07.565301 master-0 kubenswrapper[4207]: I0224 02:02:07.565248 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:02:07.565796 master-0 kubenswrapper[4207]: I0224 02:02:07.565736 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"0be92c811440a62e120f914b735c96a60861f339574fbe9008068727fac04419"}
Feb 24 02:02:07.566612 master-0 kubenswrapper[4207]: I0224 02:02:07.566536 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:02:07.566737 master-0 kubenswrapper[4207]: I0224 02:02:07.566618 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:02:07.566737 master-0 kubenswrapper[4207]: I0224 02:02:07.566633 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:02:07.567661 master-0 kubenswrapper[4207]: I0224 02:02:07.567631 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:02:07.568088 master-0 kubenswrapper[4207]: I0224 02:02:07.568055 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"25efea610d9bc2514ea6f62c6d5763641769d0262eae03f839a9b98d1e3382eb"}
Feb 24 02:02:07.568319 master-0 kubenswrapper[4207]: I0224 02:02:07.568282 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:02:07.568319 master-0 kubenswrapper[4207]: I0224 02:02:07.568322 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:02:07.568446 master-0 kubenswrapper[4207]: I0224 02:02:07.568334 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:02:07.948106 master-0 kubenswrapper[4207]: E0224 02:02:07.947996 4207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s"
Feb 24 02:02:08.425229 master-0 kubenswrapper[4207]: I0224 02:02:08.425145 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:02:08.427659 master-0 kubenswrapper[4207]: I0224 02:02:08.427605 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:02:08.427759 master-0 kubenswrapper[4207]: I0224 02:02:08.427672 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:02:08.427759 master-0 kubenswrapper[4207]: I0224 02:02:08.427693 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:02:08.427759 master-0 kubenswrapper[4207]: I0224 02:02:08.427760 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 02:02:08.575891 master-0 kubenswrapper[4207]: I0224 02:02:08.575812 4207 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="25efea610d9bc2514ea6f62c6d5763641769d0262eae03f839a9b98d1e3382eb" exitCode=1
Feb 24 02:02:08.576102 master-0 kubenswrapper[4207]: I0224 02:02:08.575909 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"25efea610d9bc2514ea6f62c6d5763641769d0262eae03f839a9b98d1e3382eb"}
Feb 24 02:02:08.578597 master-0 kubenswrapper[4207]: I0224 02:02:08.578518 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:02:08.579237 master-0 kubenswrapper[4207]: I0224 02:02:08.579165 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8"}
Feb 24 02:02:08.579709 master-0 kubenswrapper[4207]: I0224 02:02:08.579662 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:02:08.579788 master-0 kubenswrapper[4207]: I0224 02:02:08.579709 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:02:08.579788 master-0 kubenswrapper[4207]: I0224 02:02:08.579730 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:02:09.542787 master-0 kubenswrapper[4207]: I0224 02:02:09.542733 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 02:02:09.542787 master-0 kubenswrapper[4207]: E0224 02:02:09.542753 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 24 02:02:09.543423 master-0 kubenswrapper[4207]: E0224 02:02:09.542883 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fcf981f1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.318382364 +0000 UTC m=+0.661686614,LastTimestamp:2026-02-24 02:01:55.318382364 +0000 UTC m=+0.661686614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.548648 master-0 kubenswrapper[4207]: E0224 02:02:09.548138 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd4099fb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392929717 +0000 UTC m=+0.736233967,LastTimestamp:2026-02-24 02:01:55.392929717 +0000 UTC m=+0.736233967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.555240 master-0 kubenswrapper[4207]: E0224 02:02:09.555133 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd409e2db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392946907 +0000 UTC m=+0.736251157,LastTimestamp:2026-02-24 02:01:55.392946907 +0000 UTC m=+0.736251157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.563124 master-0 kubenswrapper[4207]: E0224 02:02:09.563041 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd40a123d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392959037 +0000 UTC m=+0.736263287,LastTimestamp:2026-02-24 02:01:55.392959037 +0000 UTC m=+0.736263287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.567834 master-0 kubenswrapper[4207]: E0224 02:02:09.567770 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" event="&Event{ObjectMeta:{master-0.18970c4fd910edf0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.477294576 +0000 UTC m=+0.820598856,LastTimestamp:2026-02-24 02:01:55.477294576 +0000 UTC m=+0.820598856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.576179 master-0 kubenswrapper[4207]: E0224 02:02:09.575810 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd4099fb5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd4099fb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392929717 +0000 UTC m=+0.736233967,LastTimestamp:2026-02-24 02:01:55.577478741 +0000 UTC m=+0.920783021,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.583984 master-0 kubenswrapper[4207]: E0224 02:02:09.583878 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd409e2db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd409e2db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392946907 +0000 UTC m=+0.736251157,LastTimestamp:2026-02-24 02:01:55.577512362 +0000 UTC m=+0.920816642,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.590080 master-0 kubenswrapper[4207]: E0224 02:02:09.589920 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd40a123d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd40a123d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392959037 +0000 UTC m=+0.736263287,LastTimestamp:2026-02-24 02:01:55.577530223 +0000 UTC m=+0.920834503,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.595348 master-0 kubenswrapper[4207]: E0224 02:02:09.595223 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd4099fb5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd4099fb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392929717 +0000 UTC m=+0.736233967,LastTimestamp:2026-02-24 02:01:55.60511339 +0000 UTC m=+0.948417660,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.602541 master-0 kubenswrapper[4207]: E0224 02:02:09.602109 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd409e2db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd409e2db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392946907 +0000 UTC m=+0.736251157,LastTimestamp:2026-02-24 02:01:55.605176101 +0000 UTC m=+0.948480381,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.608950 master-0 kubenswrapper[4207]: E0224 02:02:09.608886 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd40a123d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd40a123d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392959037 +0000 UTC m=+0.736263287,LastTimestamp:2026-02-24 02:01:55.605195012 +0000 UTC m=+0.948499282,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.617113 master-0 kubenswrapper[4207]: E0224 02:02:09.617006 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd4099fb5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd4099fb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392929717 +0000 UTC m=+0.736233967,LastTimestamp:2026-02-24 02:01:55.606505772 +0000 UTC m=+0.949810042,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.623748 master-0 kubenswrapper[4207]: E0224 02:02:09.623363 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd409e2db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd409e2db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392946907 +0000 UTC m=+0.736251157,LastTimestamp:2026-02-24 02:01:55.606539393 +0000 UTC m=+0.949843673,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.632211 master-0 kubenswrapper[4207]: E0224 02:02:09.632077 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd40a123d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd40a123d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392959037 +0000 UTC m=+0.736263287,LastTimestamp:2026-02-24 02:01:55.606557003 +0000 UTC m=+0.949861283,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.654606 master-0 kubenswrapper[4207]: E0224 02:02:09.654419 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd4099fb5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd4099fb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392929717 +0000 UTC m=+0.736233967,LastTimestamp:2026-02-24 02:01:55.60686776 +0000 UTC m=+0.950172040,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.661152 master-0 kubenswrapper[4207]: E0224 02:02:09.661066 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd409e2db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd409e2db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392946907 +0000 UTC m=+0.736251157,LastTimestamp:2026-02-24 02:01:55.606891901 +0000 UTC m=+0.950196171,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.664880 master-0 kubenswrapper[4207]: E0224 02:02:09.664804 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd40a123d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd40a123d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392959037 +0000 UTC m=+0.736263287,LastTimestamp:2026-02-24 02:01:55.606912192 +0000 UTC m=+0.950216472,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.670661 master-0 kubenswrapper[4207]: E0224 02:02:09.670485 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd4099fb5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd4099fb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392929717 +0000 UTC m=+0.736233967,LastTimestamp:2026-02-24 02:01:55.608253593 +0000 UTC m=+0.951557873,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.693879 master-0 kubenswrapper[4207]: E0224 02:02:09.693722 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd4099fb5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd4099fb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392929717 +0000 UTC m=+0.736233967,LastTimestamp:2026-02-24 02:01:55.608289203 +0000 UTC m=+0.951593483,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.710841 master-0 kubenswrapper[4207]: E0224 02:02:09.710654 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd409e2db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd409e2db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392946907 +0000 UTC m=+0.736251157,LastTimestamp:2026-02-24 02:01:55.608321944 +0000 UTC m=+0.951626224,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.715918 master-0 kubenswrapper[4207]: E0224 02:02:09.715763 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd40a123d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd40a123d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392959037 +0000 UTC m=+0.736263287,LastTimestamp:2026-02-24 02:01:55.608350995 +0000 UTC m=+0.951655285,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.721074 master-0 kubenswrapper[4207]: E0224 02:02:09.720901 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd409e2db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd409e2db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392946907 +0000 UTC m=+0.736251157,LastTimestamp:2026-02-24 02:01:55.608400126 +0000 UTC m=+0.951704406,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.726651 master-0 kubenswrapper[4207]: E0224 02:02:09.726521 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd40a123d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd40a123d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392959037 +0000 UTC m=+0.736263287,LastTimestamp:2026-02-24 02:01:55.608454227 +0000 UTC m=+0.951758507,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.732750 master-0 kubenswrapper[4207]: E0224 02:02:09.732631 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd4099fb5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd4099fb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392929717 +0000 UTC m=+0.736233967,LastTimestamp:2026-02-24 02:01:55.610549876 +0000 UTC m=+0.953854156,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.739206 master-0 kubenswrapper[4207]: E0224 02:02:09.739081 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.18970c4fd409e2db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.18970c4fd409e2db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:55.392946907 +0000 UTC m=+0.736251157,LastTimestamp:2026-02-24 02:01:55.610597687 +0000 UTC m=+0.953901967,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.749343 master-0 kubenswrapper[4207]: E0224 02:02:09.749213 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c50356d98a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:57.026871462 +0000 UTC m=+2.370175732,LastTimestamp:2026-02-24 02:01:57.026871462 +0000 UTC m=+2.370175732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.756599 master-0 kubenswrapper[4207]: E0224 02:02:09.756460 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c50365de098 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:57.04261852 +0000 UTC m=+2.385922790,LastTimestamp:2026-02-24 02:01:57.04261852 +0000 UTC m=+2.385922790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.783593 master-0 kubenswrapper[4207]: E0224 02:02:09.782948 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c503667290d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:57.043226893 +0000 UTC m=+2.386531173,LastTimestamp:2026-02-24 02:01:57.043226893 +0000 UTC m=+2.386531173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.793833 master-0 kubenswrapper[4207]: E0224 02:02:09.793656 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.18970c503715d8bf kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:57.054675135 +0000 UTC m=+2.397979405,LastTimestamp:2026-02-24 02:01:57.054675135 +0000 UTC m=+2.397979405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:02:09.803959 master-0 kubenswrapper[4207]: E0224 02:02:09.803794 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.18970c5037c2d56d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:01:57.066012013 +0000 UTC m=+2.409316293,LastTimestamp:2026-02-24 02:01:57.066012013 +0000 UTC m=+2.409316293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.810983 master-0 kubenswrapper[4207]: E0224 02:02:09.810412 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c51283309f2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" in 4.057s (4.057s including waiting). 
Image size: 464984427 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:01.09989733 +0000 UTC m=+6.443201580,LastTimestamp:2026-02-24 02:02:01.09989733 +0000 UTC m=+6.443201580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.817632 master-0 kubenswrapper[4207]: E0224 02:02:09.817060 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c513780feee openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:01.356664558 +0000 UTC m=+6.699968798,LastTimestamp:2026-02-24 02:02:01.356664558 +0000 UTC m=+6.699968798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.822863 master-0 kubenswrapper[4207]: E0224 02:02:09.822747 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c513983b75f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:01.390397279 +0000 UTC m=+6.733701529,LastTimestamp:2026-02-24 02:02:01.390397279 +0000 UTC m=+6.733701529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.828010 master-0 kubenswrapper[4207]: E0224 02:02:09.827923 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c516823c59c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.17263862 +0000 UTC m=+7.515942890,LastTimestamp:2026-02-24 02:02:02.17263862 +0000 UTC m=+7.515942890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.832670 master-0 kubenswrapper[4207]: E0224 02:02:09.832514 4207 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.18970c516b357a9d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" in 5.158s (5.158s including waiting). Image size: 529218694 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.224130717 +0000 UTC m=+7.567434957,LastTimestamp:2026-02-24 02:02:02.224130717 +0000 UTC m=+7.567434957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.837515 master-0 kubenswrapper[4207]: E0224 02:02:09.837263 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.18970c51789f0352 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.449150802 +0000 UTC m=+7.792455042,LastTimestamp:2026-02-24 02:02:02.449150802 +0000 UTC m=+7.792455042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.841619 master-0 kubenswrapper[4207]: E0224 02:02:09.841502 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c5178c028f6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.451323126 +0000 UTC m=+7.794627366,LastTimestamp:2026-02-24 02:02:02.451323126 +0000 UTC m=+7.794627366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.846281 master-0 kubenswrapper[4207]: E0224 02:02:09.846017 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.18970c5179589bb9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.461313977 +0000 UTC m=+7.804618217,LastTimestamp:2026-02-24 
02:02:02.461313977 +0000 UTC m=+7.804618217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.855354 master-0 kubenswrapper[4207]: E0224 02:02:09.855111 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c51797336ee openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.463057646 +0000 UTC m=+7.806361886,LastTimestamp:2026-02-24 02:02:02.463057646 +0000 UTC m=+7.806361886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.860273 master-0 kubenswrapper[4207]: E0224 02:02:09.860146 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.18970c51797a876b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.463537003 +0000 UTC m=+7.806841243,LastTimestamp:2026-02-24 02:02:02.463537003 +0000 UTC m=+7.806841243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.866298 master-0 kubenswrapper[4207]: E0224 02:02:09.866187 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.18970c516823c59c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c516823c59c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.17263862 +0000 UTC m=+7.515942890,LastTimestamp:2026-02-24 02:02:02.53704789 +0000 UTC m=+7.880352120,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.872379 master-0 kubenswrapper[4207]: E0224 02:02:09.872136 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.18970c51873a2d68 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.69420068 +0000 UTC m=+8.037504950,LastTimestamp:2026-02-24 02:02:02.69420068 +0000 UTC m=+8.037504950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.878408 master-0 kubenswrapper[4207]: E0224 02:02:09.878284 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.18970c518842eda0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.711551392 +0000 UTC m=+8.054855632,LastTimestamp:2026-02-24 02:02:02.711551392 +0000 UTC m=+8.054855632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.885723 master-0 kubenswrapper[4207]: E0224 02:02:09.885554 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.18970c5178c028f6\" is forbidden: User \"system:anonymous\" cannot 
patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c5178c028f6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.451323126 +0000 UTC m=+7.794627366,LastTimestamp:2026-02-24 02:02:02.711988077 +0000 UTC m=+8.055292317,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.892671 master-0 kubenswrapper[4207]: E0224 02:02:09.892080 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.18970c51797336ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c51797336ee openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.463057646 +0000 UTC m=+7.806361886,LastTimestamp:2026-02-24 02:02:02.72263717 +0000 UTC m=+8.065941410,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.916596 master-0 kubenswrapper[4207]: E0224 02:02:09.900527 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c51b9dca7b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:03.543709625 +0000 UTC m=+8.887013875,LastTimestamp:2026-02-24 02:02:03.543709625 +0000 UTC m=+8.887013875,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.923367 master-0 kubenswrapper[4207]: E0224 02:02:09.923251 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.18970c51b9dca7b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c51b9dca7b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:03.543709625 +0000 UTC m=+8.887013875,LastTimestamp:2026-02-24 02:02:04.55471069 +0000 UTC m=+9.898014970,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.930201 master-0 kubenswrapper[4207]: E0224 02:02:09.929957 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c526041ea3e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.292s (9.292s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.335363646 +0000 UTC m=+11.678667926,LastTimestamp:2026-02-24 02:02:06.335363646 +0000 UTC m=+11.678667926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.936828 master-0 kubenswrapper[4207]: E0224 02:02:09.936658 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.18970c5263e02369 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.341s (9.341s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.396064617 +0000 UTC m=+11.739368887,LastTimestamp:2026-02-24 02:02:06.396064617 +0000 UTC m=+11.739368887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.942568 master-0 kubenswrapper[4207]: E0224 02:02:09.942503 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c52646c803c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.378s (9.378s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.40526342 +0000 UTC m=+11.748567691,LastTimestamp:2026-02-24 02:02:06.40526342 +0000 UTC m=+11.748567691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.947461 master-0 kubenswrapper[4207]: E0224 02:02:09.947405 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c5270bddd36 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.61192223 +0000 UTC m=+11.955226480,LastTimestamp:2026-02-24 02:02:06.61192223 +0000 UTC m=+11.955226480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.953454 master-0 kubenswrapper[4207]: E0224 02:02:09.953402 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c5271a331b2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.626951602 +0000 UTC m=+11.970255892,LastTimestamp:2026-02-24 02:02:06.626951602 +0000 UTC m=+11.970255892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.959683 master-0 kubenswrapper[4207]: E0224 02:02:09.958645 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c5271bfd1f4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.628827636 +0000 UTC m=+11.972131916,LastTimestamp:2026-02-24 02:02:06.628827636 +0000 UTC m=+11.972131916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.968007 master-0 kubenswrapper[4207]: E0224 02:02:09.967870 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.18970c52720d179b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.633891739 +0000 UTC m=+11.977195979,LastTimestamp:2026-02-24 02:02:06.633891739 +0000 UTC m=+11.977195979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.976946 master-0 kubenswrapper[4207]: E0224 02:02:09.976865 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.18970c5272fe4692 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.649697938 +0000 UTC m=+11.993002178,LastTimestamp:2026-02-24 02:02:06.649697938 +0000 UTC m=+11.993002178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.984840 master-0 kubenswrapper[4207]: E0224 02:02:09.984759 4207 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c52744be69a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.671562394 +0000 UTC m=+12.014866674,LastTimestamp:2026-02-24 02:02:06.671562394 +0000 UTC m=+12.014866674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.990940 master-0 kubenswrapper[4207]: E0224 02:02:09.990814 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c5275bcd25e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.695739998 +0000 UTC m=+12.039044268,LastTimestamp:2026-02-24 02:02:06.695739998 +0000 UTC m=+12.039044268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:09.997657 
master-0 kubenswrapper[4207]: E0224 02:02:09.997496 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c52a9b32fa9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:07.567523753 +0000 UTC m=+12.910828003,LastTimestamp:2026-02-24 02:02:07.567523753 +0000 UTC m=+12.910828003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:10.004492 master-0 kubenswrapper[4207]: E0224 02:02:10.004374 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c52bb91729e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 
02:02:07.867302558 +0000 UTC m=+13.210606828,LastTimestamp:2026-02-24 02:02:07.867302558 +0000 UTC m=+13.210606828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:10.008540 master-0 kubenswrapper[4207]: E0224 02:02:10.008424 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c52c4fff10a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:08.025538826 +0000 UTC m=+13.368843096,LastTimestamp:2026-02-24 02:02:08.025538826 +0000 UTC m=+13.368843096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:10.014475 master-0 kubenswrapper[4207]: E0224 02:02:10.014409 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c52c51de9d7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:08.027503063 +0000 UTC m=+13.370807343,LastTimestamp:2026-02-24 02:02:08.027503063 +0000 UTC m=+13.370807343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:10.018584 master-0 kubenswrapper[4207]: E0224 02:02:10.018455 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c5322edc603 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\" in 2.972s (2.972s including waiting). 
Image size: 505137106 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:09.601406467 +0000 UTC m=+14.944710737,LastTimestamp:2026-02-24 02:02:09.601406467 +0000 UTC m=+14.944710737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:10.019714 master-0 kubenswrapper[4207]: I0224 02:02:10.019664 4207 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 02:02:10.029722 master-0 kubenswrapper[4207]: E0224 02:02:10.029558 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c5330c99d7a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:09.833917818 +0000 UTC m=+15.177222088,LastTimestamp:2026-02-24 02:02:09.833917818 +0000 UTC m=+15.177222088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:10.036375 master-0 kubenswrapper[4207]: E0224 02:02:10.036289 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c5331bbc155 kube-system 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:09.849786709 +0000 UTC m=+15.193090959,LastTimestamp:2026-02-24 02:02:09.849786709 +0000 UTC m=+15.193090959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:10.038493 master-0 kubenswrapper[4207]: I0224 02:02:10.038456 4207 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 02:02:10.335091 master-0 kubenswrapper[4207]: I0224 02:02:10.335046 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:10.592359 master-0 kubenswrapper[4207]: I0224 02:02:10.592232 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765"} Feb 24 02:02:10.593022 master-0 kubenswrapper[4207]: I0224 02:02:10.592378 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:10.593294 master-0 kubenswrapper[4207]: I0224 02:02:10.593259 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:10.593335 master-0 kubenswrapper[4207]: I0224 
02:02:10.593300 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:10.593335 master-0 kubenswrapper[4207]: I0224 02:02:10.593310 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:10.593561 master-0 kubenswrapper[4207]: I0224 02:02:10.593534 4207 scope.go:117] "RemoveContainer" containerID="25efea610d9bc2514ea6f62c6d5763641769d0262eae03f839a9b98d1e3382eb" Feb 24 02:02:10.760933 master-0 kubenswrapper[4207]: W0224 02:02:10.759607 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 24 02:02:10.760933 master-0 kubenswrapper[4207]: E0224 02:02:10.760159 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 02:02:10.901134 master-0 kubenswrapper[4207]: E0224 02:02:10.900958 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c536fc7b1df kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present 
on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:10.890756575 +0000 UTC m=+16.234060815,LastTimestamp:2026-02-24 02:02:10.890756575 +0000 UTC m=+16.234060815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:10.960205 master-0 kubenswrapper[4207]: E0224 02:02:10.960033 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c537388b5d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" in 2.926s (2.926s including waiting). 
Image size: 514875199 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:10.953737683 +0000 UTC m=+16.297041923,LastTimestamp:2026-02-24 02:02:10.953737683 +0000 UTC m=+16.297041923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:11.126825 master-0 kubenswrapper[4207]: E0224 02:02:11.126645 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.18970c5270bddd36\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c5270bddd36 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.61192223 +0000 UTC m=+11.955226480,LastTimestamp:2026-02-24 02:02:11.118465811 +0000 UTC m=+16.461770091,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:11.144417 master-0 kubenswrapper[4207]: E0224 02:02:11.144176 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.18970c5271a331b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970c5271a331b2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:06.626951602 +0000 UTC m=+11.970255892,LastTimestamp:2026-02-24 02:02:11.135397659 +0000 UTC m=+16.478701929,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:11.153124 master-0 kubenswrapper[4207]: W0224 02:02:11.153066 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 24 02:02:11.153247 master-0 kubenswrapper[4207]: E0224 02:02:11.153132 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 02:02:11.223869 master-0 kubenswrapper[4207]: E0224 02:02:11.223709 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c53834ad3d7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:11.218117591 +0000 UTC m=+16.561421871,LastTimestamp:2026-02-24 02:02:11.218117591 +0000 UTC m=+16.561421871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:11.239446 master-0 kubenswrapper[4207]: E0224 02:02:11.239192 4207 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.18970c538431b451 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:11.233248337 +0000 UTC m=+16.576552617,LastTimestamp:2026-02-24 02:02:11.233248337 +0000 UTC m=+16.576552617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:11.334161 master-0 kubenswrapper[4207]: I0224 02:02:11.334096 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:11.599594 master-0 kubenswrapper[4207]: I0224 02:02:11.599432 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d"} Feb 24 02:02:11.599594 master-0 kubenswrapper[4207]: I0224 02:02:11.599560 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:11.601284 master-0 kubenswrapper[4207]: I0224 02:02:11.601183 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:11.601284 master-0 kubenswrapper[4207]: I0224 02:02:11.601243 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:11.601284 master-0 kubenswrapper[4207]: I0224 02:02:11.601262 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:11.603098 master-0 kubenswrapper[4207]: I0224 02:02:11.603007 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce"} Feb 24 02:02:11.603234 master-0 kubenswrapper[4207]: I0224 02:02:11.603150 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:11.604246 master-0 kubenswrapper[4207]: I0224 02:02:11.604190 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:11.604318 master-0 kubenswrapper[4207]: I0224 02:02:11.604248 4207 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:11.604318 master-0 kubenswrapper[4207]: I0224 02:02:11.604272 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:11.873885 master-0 kubenswrapper[4207]: W0224 02:02:11.873687 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:11.873885 master-0 kubenswrapper[4207]: E0224 02:02:11.873770 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 02:02:12.336128 master-0 kubenswrapper[4207]: I0224 02:02:12.336013 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:12.609122 master-0 kubenswrapper[4207]: I0224 02:02:12.609064 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:12.609709 master-0 kubenswrapper[4207]: I0224 02:02:12.609066 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:12.621992 master-0 kubenswrapper[4207]: I0224 02:02:12.621924 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:12.621992 master-0 kubenswrapper[4207]: I0224 02:02:12.621981 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientMemory" Feb 24 02:02:12.622163 master-0 kubenswrapper[4207]: I0224 02:02:12.622035 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:12.622163 master-0 kubenswrapper[4207]: I0224 02:02:12.622068 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:12.622163 master-0 kubenswrapper[4207]: I0224 02:02:12.622000 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:12.622163 master-0 kubenswrapper[4207]: I0224 02:02:12.622122 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:13.334080 master-0 kubenswrapper[4207]: I0224 02:02:13.333985 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:13.666638 master-0 kubenswrapper[4207]: I0224 02:02:13.666461 4207 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:02:13.667553 master-0 kubenswrapper[4207]: I0224 02:02:13.666720 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:13.668201 master-0 kubenswrapper[4207]: I0224 02:02:13.668138 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:13.668201 master-0 kubenswrapper[4207]: I0224 02:02:13.668199 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:13.668342 master-0 kubenswrapper[4207]: I0224 02:02:13.668218 4207 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:13.675000 master-0 kubenswrapper[4207]: I0224 02:02:13.674950 4207 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:02:13.980165 master-0 kubenswrapper[4207]: I0224 02:02:13.980085 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:02:13.980366 master-0 kubenswrapper[4207]: I0224 02:02:13.980324 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:13.981853 master-0 kubenswrapper[4207]: I0224 02:02:13.981789 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:13.981931 master-0 kubenswrapper[4207]: I0224 02:02:13.981863 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:13.981931 master-0 kubenswrapper[4207]: I0224 02:02:13.981884 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:14.175728 master-0 kubenswrapper[4207]: I0224 02:02:14.175564 4207 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:02:14.182896 master-0 kubenswrapper[4207]: I0224 02:02:14.182824 4207 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:02:14.200661 master-0 kubenswrapper[4207]: I0224 02:02:14.200563 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:02:14.336934 master-0 kubenswrapper[4207]: I0224 02:02:14.336774 4207 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:14.616933 master-0 kubenswrapper[4207]: I0224 02:02:14.616764 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:14.617127 master-0 kubenswrapper[4207]: I0224 02:02:14.616808 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:14.618294 master-0 kubenswrapper[4207]: I0224 02:02:14.618241 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:14.618369 master-0 kubenswrapper[4207]: I0224 02:02:14.618302 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:14.618369 master-0 kubenswrapper[4207]: I0224 02:02:14.618324 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:14.619224 master-0 kubenswrapper[4207]: I0224 02:02:14.619173 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:14.619224 master-0 kubenswrapper[4207]: I0224 02:02:14.619224 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:14.619357 master-0 kubenswrapper[4207]: I0224 02:02:14.619243 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:14.626838 master-0 kubenswrapper[4207]: I0224 02:02:14.626789 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:02:14.955766 master-0 kubenswrapper[4207]: E0224 02:02:14.955694 4207 controller.go:145] "Failed to 
ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 02:02:15.334395 master-0 kubenswrapper[4207]: I0224 02:02:15.334206 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:15.479568 master-0 kubenswrapper[4207]: E0224 02:02:15.479444 4207 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 24 02:02:15.619217 master-0 kubenswrapper[4207]: I0224 02:02:15.619080 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:15.619341 master-0 kubenswrapper[4207]: I0224 02:02:15.619288 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:15.620614 master-0 kubenswrapper[4207]: I0224 02:02:15.620474 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:15.620852 master-0 kubenswrapper[4207]: I0224 02:02:15.620805 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:15.620934 master-0 kubenswrapper[4207]: I0224 02:02:15.620863 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:15.620934 master-0 kubenswrapper[4207]: I0224 02:02:15.620908 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:15.621716 master-0 kubenswrapper[4207]: I0224 02:02:15.621674 4207 kubelet_node_status.go:724] "Recording 
event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:15.621716 master-0 kubenswrapper[4207]: I0224 02:02:15.621711 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:16.335184 master-0 kubenswrapper[4207]: I0224 02:02:16.335116 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:16.543969 master-0 kubenswrapper[4207]: I0224 02:02:16.543787 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:16.546734 master-0 kubenswrapper[4207]: I0224 02:02:16.546677 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:16.546852 master-0 kubenswrapper[4207]: I0224 02:02:16.546759 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:16.546852 master-0 kubenswrapper[4207]: I0224 02:02:16.546780 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:16.546852 master-0 kubenswrapper[4207]: I0224 02:02:16.546850 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:02:16.554540 master-0 kubenswrapper[4207]: E0224 02:02:16.554473 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 24 02:02:16.622469 master-0 kubenswrapper[4207]: I0224 02:02:16.622294 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:16.623369 master-0 kubenswrapper[4207]: I0224 
02:02:16.623319 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:16.623369 master-0 kubenswrapper[4207]: I0224 02:02:16.623368 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:16.623522 master-0 kubenswrapper[4207]: I0224 02:02:16.623386 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:17.193816 master-0 kubenswrapper[4207]: W0224 02:02:17.193600 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 24 02:02:17.193816 master-0 kubenswrapper[4207]: E0224 02:02:17.193667 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 02:02:17.335900 master-0 kubenswrapper[4207]: I0224 02:02:17.335769 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:18.334136 master-0 kubenswrapper[4207]: I0224 02:02:18.333932 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:18.367457 master-0 kubenswrapper[4207]: I0224 02:02:18.367372 4207 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:02:18.368245 master-0 kubenswrapper[4207]: I0224 02:02:18.367659 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:18.369239 master-0 kubenswrapper[4207]: I0224 02:02:18.369178 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:18.369393 master-0 kubenswrapper[4207]: I0224 02:02:18.369262 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:18.369393 master-0 kubenswrapper[4207]: I0224 02:02:18.369286 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:18.374683 master-0 kubenswrapper[4207]: I0224 02:02:18.374618 4207 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:02:18.504209 master-0 kubenswrapper[4207]: I0224 02:02:18.504130 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:18.505831 master-0 kubenswrapper[4207]: I0224 02:02:18.505688 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:18.505831 master-0 kubenswrapper[4207]: I0224 02:02:18.505755 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:18.505831 master-0 kubenswrapper[4207]: I0224 02:02:18.505776 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:18.506266 master-0 kubenswrapper[4207]: I0224 02:02:18.506200 4207 scope.go:117] "RemoveContainer" containerID="6c175419567b5ca9a91e007584c1efdbed9f795fb8d90c8307dd30f12d0f572e" Feb 24 02:02:18.519242 master-0 
kubenswrapper[4207]: E0224 02:02:18.519068 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.18970c516823c59c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c516823c59c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.17263862 +0000 UTC m=+7.515942890,LastTimestamp:2026-02-24 02:02:18.510690981 +0000 UTC m=+23.853995251,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:18.631068 master-0 kubenswrapper[4207]: I0224 02:02:18.630494 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:18.631250 master-0 kubenswrapper[4207]: I0224 02:02:18.630640 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:02:18.632333 master-0 kubenswrapper[4207]: I0224 02:02:18.632260 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:18.632333 master-0 kubenswrapper[4207]: I0224 02:02:18.632323 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Feb 24 02:02:18.632333 master-0 kubenswrapper[4207]: I0224 02:02:18.632343 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:18.852071 master-0 kubenswrapper[4207]: E0224 02:02:18.851900 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.18970c5178c028f6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c5178c028f6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.451323126 +0000 UTC m=+7.794627366,LastTimestamp:2026-02-24 02:02:18.839865145 +0000 UTC m=+24.183169415,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:18.868119 master-0 kubenswrapper[4207]: E0224 02:02:18.867870 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.18970c51797336ee\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c51797336ee openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:02.463057646 +0000 UTC m=+7.806361886,LastTimestamp:2026-02-24 02:02:18.85945358 +0000 UTC m=+24.202757850,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:19.335013 master-0 kubenswrapper[4207]: I0224 02:02:19.334950 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:19.636911 master-0 kubenswrapper[4207]: I0224 02:02:19.636766 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 24 02:02:19.637828 master-0 kubenswrapper[4207]: I0224 02:02:19.637656 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 24 02:02:19.638294 master-0 kubenswrapper[4207]: I0224 02:02:19.638214 4207 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="4ea164dd4d44c905424ce0b0b3ea58702494938b88cbbbe52d4ce16914c7762b" exitCode=1 Feb 24 02:02:19.638294 master-0 kubenswrapper[4207]: I0224 02:02:19.638268 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"4ea164dd4d44c905424ce0b0b3ea58702494938b88cbbbe52d4ce16914c7762b"} Feb 24 02:02:19.638432 master-0 kubenswrapper[4207]: I0224 02:02:19.638350 4207 scope.go:117] "RemoveContainer" containerID="6c175419567b5ca9a91e007584c1efdbed9f795fb8d90c8307dd30f12d0f572e" Feb 24 02:02:19.638432 master-0 kubenswrapper[4207]: I0224 02:02:19.638394 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:19.638776 master-0 kubenswrapper[4207]: I0224 02:02:19.638722 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:19.639474 master-0 kubenswrapper[4207]: I0224 02:02:19.639428 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:19.639544 master-0 kubenswrapper[4207]: I0224 02:02:19.639484 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:19.639544 master-0 kubenswrapper[4207]: I0224 02:02:19.639503 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:19.640244 master-0 kubenswrapper[4207]: I0224 02:02:19.640185 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:19.640310 master-0 kubenswrapper[4207]: I0224 02:02:19.640289 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:19.640310 master-0 kubenswrapper[4207]: I0224 02:02:19.640305 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:19.641257 master-0 kubenswrapper[4207]: I0224 02:02:19.640800 4207 scope.go:117] "RemoveContainer" 
containerID="4ea164dd4d44c905424ce0b0b3ea58702494938b88cbbbe52d4ce16914c7762b" Feb 24 02:02:19.641257 master-0 kubenswrapper[4207]: E0224 02:02:19.640995 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 24 02:02:19.645106 master-0 kubenswrapper[4207]: I0224 02:02:19.645052 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:02:19.652388 master-0 kubenswrapper[4207]: E0224 02:02:19.652209 4207 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.18970c51b9dca7b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c51b9dca7b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:03.543709625 +0000 UTC m=+8.887013875,LastTimestamp:2026-02-24 02:02:19.640959945 +0000 UTC m=+24.984264185,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:20.294355 master-0 kubenswrapper[4207]: I0224 02:02:20.294283 4207 csr.go:261] certificate signing request csr-94djb is approved, waiting to be issued Feb 24 02:02:20.334280 master-0 kubenswrapper[4207]: I0224 02:02:20.334211 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:20.643682 master-0 kubenswrapper[4207]: I0224 02:02:20.643518 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 24 02:02:20.644635 master-0 kubenswrapper[4207]: I0224 02:02:20.644495 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:20.645796 master-0 kubenswrapper[4207]: I0224 02:02:20.645738 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:20.645926 master-0 kubenswrapper[4207]: I0224 02:02:20.645800 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:20.645926 master-0 kubenswrapper[4207]: I0224 02:02:20.645820 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:21.334196 master-0 kubenswrapper[4207]: I0224 02:02:21.334078 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:21.964020 master-0 kubenswrapper[4207]: E0224 02:02:21.963913 4207 
controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 02:02:22.333911 master-0 kubenswrapper[4207]: I0224 02:02:22.333851 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:23.334430 master-0 kubenswrapper[4207]: I0224 02:02:23.334392 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:23.555307 master-0 kubenswrapper[4207]: I0224 02:02:23.555257 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:23.556558 master-0 kubenswrapper[4207]: I0224 02:02:23.556511 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:23.556693 master-0 kubenswrapper[4207]: I0224 02:02:23.556616 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:23.556693 master-0 kubenswrapper[4207]: I0224 02:02:23.556636 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:23.556807 master-0 kubenswrapper[4207]: I0224 02:02:23.556698 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:02:23.563972 master-0 kubenswrapper[4207]: E0224 02:02:23.563914 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 24 02:02:24.334465 master-0 kubenswrapper[4207]: I0224 02:02:24.334331 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:25.334215 master-0 kubenswrapper[4207]: I0224 02:02:25.334111 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:25.480158 master-0 kubenswrapper[4207]: E0224 02:02:25.480110 4207 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 24 02:02:26.335446 master-0 kubenswrapper[4207]: I0224 02:02:26.335288 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:27.334739 master-0 kubenswrapper[4207]: I0224 02:02:27.334664 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:28.334106 master-0 kubenswrapper[4207]: I0224 02:02:28.333985 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:28.971654 master-0 kubenswrapper[4207]: E0224 02:02:28.971513 
4207 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 02:02:29.184034 master-0 kubenswrapper[4207]: W0224 02:02:29.183921 4207 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:29.184034 master-0 kubenswrapper[4207]: E0224 02:02:29.183998 4207 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 02:02:29.285678 master-0 kubenswrapper[4207]: I0224 02:02:29.285155 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:02:29.285678 master-0 kubenswrapper[4207]: I0224 02:02:29.285335 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:29.286935 master-0 kubenswrapper[4207]: I0224 02:02:29.286770 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:29.286935 master-0 kubenswrapper[4207]: I0224 02:02:29.286909 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:29.287152 master-0 kubenswrapper[4207]: I0224 02:02:29.286985 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:29.334598 master-0 
kubenswrapper[4207]: I0224 02:02:29.334463 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:30.334874 master-0 kubenswrapper[4207]: I0224 02:02:30.334742 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:30.564590 master-0 kubenswrapper[4207]: I0224 02:02:30.564484 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:30.566187 master-0 kubenswrapper[4207]: I0224 02:02:30.566133 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:30.566285 master-0 kubenswrapper[4207]: I0224 02:02:30.566190 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:30.566285 master-0 kubenswrapper[4207]: I0224 02:02:30.566211 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:30.566285 master-0 kubenswrapper[4207]: I0224 02:02:30.566280 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:02:30.573698 master-0 kubenswrapper[4207]: E0224 02:02:30.573640 4207 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 24 02:02:31.335078 master-0 kubenswrapper[4207]: I0224 02:02:31.334968 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:32.335749 master-0 kubenswrapper[4207]: I0224 02:02:32.335673 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:32.504141 master-0 kubenswrapper[4207]: I0224 02:02:32.504031 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:02:32.505727 master-0 kubenswrapper[4207]: I0224 02:02:32.505669 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:02:32.505832 master-0 kubenswrapper[4207]: I0224 02:02:32.505745 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:02:32.505832 master-0 kubenswrapper[4207]: I0224 02:02:32.505764 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:02:32.506403 master-0 kubenswrapper[4207]: I0224 02:02:32.506358 4207 scope.go:117] "RemoveContainer" containerID="4ea164dd4d44c905424ce0b0b3ea58702494938b88cbbbe52d4ce16914c7762b" Feb 24 02:02:32.506663 master-0 kubenswrapper[4207]: E0224 02:02:32.506616 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 24 02:02:32.515500 master-0 kubenswrapper[4207]: E0224 02:02:32.515235 4207 event.go:359] "Server rejected event (will not 
retry!)" err="events \"kube-rbac-proxy-crio-master-0.18970c51b9dca7b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.18970c51b9dca7b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:02:03.543709625 +0000 UTC m=+8.887013875,LastTimestamp:2026-02-24 02:02:32.506545332 +0000 UTC m=+37.849849612,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:02:33.334172 master-0 kubenswrapper[4207]: I0224 02:02:33.334114 4207 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 02:02:33.461554 master-0 kubenswrapper[4207]: I0224 02:02:33.461503 4207 csr.go:257] certificate signing request csr-94djb is issued Feb 24 02:02:33.868793 master-0 kubenswrapper[4207]: I0224 02:02:33.868720 4207 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 02:02:34.175153 master-0 kubenswrapper[4207]: I0224 02:02:34.174985 4207 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 24 
02:02:34.175374 master-0 kubenswrapper[4207]: W0224 02:02:34.175286 4207 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 24 02:02:34.342065 master-0 kubenswrapper[4207]: I0224 02:02:34.341984 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:34.357110 master-0 kubenswrapper[4207]: I0224 02:02:34.357042 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:34.414607 master-0 kubenswrapper[4207]: I0224 02:02:34.414538 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:34.464465 master-0 kubenswrapper[4207]: I0224 02:02:34.464369 4207 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-25 01:54:04 +0000 UTC, rotation deadline is 2026-02-24 20:54:51.276303087 +0000 UTC
Feb 24 02:02:34.465215 master-0 kubenswrapper[4207]: I0224 02:02:34.465186 4207 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h52m16.811131881s for next certificate rotation
Feb 24 02:02:34.679149 master-0 kubenswrapper[4207]: I0224 02:02:34.679059 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:34.679149 master-0 kubenswrapper[4207]: E0224 02:02:34.679107 4207 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 24 02:02:34.700857 master-0 kubenswrapper[4207]: I0224 02:02:34.700796 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:34.718705 master-0 kubenswrapper[4207]: I0224 02:02:34.718554 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:34.776305 master-0 kubenswrapper[4207]: I0224 02:02:34.776275 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:34.790246 master-0 kubenswrapper[4207]: I0224 02:02:34.790186 4207 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 24 02:02:35.040305 master-0 kubenswrapper[4207]: I0224 02:02:35.040180 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:35.040614 master-0 kubenswrapper[4207]: E0224 02:02:35.040559 4207 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 24 02:02:35.137191 master-0 kubenswrapper[4207]: I0224 02:02:35.137150 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:35.151539 master-0 kubenswrapper[4207]: I0224 02:02:35.151477 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:35.207593 master-0 kubenswrapper[4207]: I0224 02:02:35.207538 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:35.325151 master-0 kubenswrapper[4207]: I0224 02:02:35.325069 4207 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 24 02:02:35.439971 master-0 kubenswrapper[4207]: I0224 02:02:35.439904 4207 apiserver.go:52] "Watching apiserver"
Feb 24 02:02:35.441846 master-0 kubenswrapper[4207]: I0224 02:02:35.441808 4207 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 24 02:02:35.442032 master-0 kubenswrapper[4207]: I0224 02:02:35.441956 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[]
Feb 24 02:02:35.471413 master-0 kubenswrapper[4207]: I0224 02:02:35.471382 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:35.472127 master-0 kubenswrapper[4207]: E0224 02:02:35.472103 4207 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 24 02:02:35.480615 master-0 kubenswrapper[4207]: E0224 02:02:35.480554 4207 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 24 02:02:35.534854 master-0 kubenswrapper[4207]: I0224 02:02:35.534813 4207 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 24 02:02:35.977956 master-0 kubenswrapper[4207]: E0224 02:02:35.977887 4207 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Feb 24 02:02:36.061159 master-0 kubenswrapper[4207]: I0224 02:02:36.061124 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:36.077593 master-0 kubenswrapper[4207]: I0224 02:02:36.077528 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:36.133352 master-0 kubenswrapper[4207]: I0224 02:02:36.133301 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:36.400867 master-0 kubenswrapper[4207]: I0224 02:02:36.400766 4207 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 02:02:36.400867 master-0 kubenswrapper[4207]: E0224 02:02:36.400803 4207 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 24 02:02:37.574648 master-0 kubenswrapper[4207]: I0224 02:02:37.574606 4207 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:02:37.577100 master-0 kubenswrapper[4207]: I0224 02:02:37.577057 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:02:37.577239 master-0 kubenswrapper[4207]: I0224 02:02:37.577121 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:02:37.577239 master-0 kubenswrapper[4207]: I0224 02:02:37.577141 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:02:37.577239 master-0 kubenswrapper[4207]: I0224 02:02:37.577197 4207 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 02:02:37.589528 master-0 kubenswrapper[4207]: I0224 02:02:37.589497 4207 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 24 02:02:38.271009 master-0 kubenswrapper[4207]: I0224 02:02:38.270928 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-7d7db75979-drrqm"]
Feb 24 02:02:38.271440 master-0 kubenswrapper[4207]: I0224 02:02:38.271352 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.274025 master-0 kubenswrapper[4207]: I0224 02:02:38.273966 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 24 02:02:38.274428 master-0 kubenswrapper[4207]: I0224 02:02:38.274383 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 24 02:02:38.275155 master-0 kubenswrapper[4207]: I0224 02:02:38.274549 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 24 02:02:38.421953 master-0 kubenswrapper[4207]: I0224 02:02:38.421634 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3332acec-1553-4594-a903-a322399f6d9d-host-etc-kube\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.421953 master-0 kubenswrapper[4207]: I0224 02:02:38.421765 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3332acec-1553-4594-a903-a322399f6d9d-metrics-tls\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.421953 master-0 kubenswrapper[4207]: I0224 02:02:38.421865 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qs2\" (UniqueName: \"kubernetes.io/projected/3332acec-1553-4594-a903-a322399f6d9d-kube-api-access-x6qs2\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.441592 master-0 kubenswrapper[4207]: I0224 02:02:38.441513 4207 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 24 02:02:38.451852 master-0 kubenswrapper[4207]: I0224 02:02:38.451823 4207 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 24 02:02:38.522798 master-0 kubenswrapper[4207]: I0224 02:02:38.522311 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6qs2\" (UniqueName: \"kubernetes.io/projected/3332acec-1553-4594-a903-a322399f6d9d-kube-api-access-x6qs2\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.522798 master-0 kubenswrapper[4207]: I0224 02:02:38.522401 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3332acec-1553-4594-a903-a322399f6d9d-metrics-tls\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.522798 master-0 kubenswrapper[4207]: I0224 02:02:38.522437 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3332acec-1553-4594-a903-a322399f6d9d-host-etc-kube\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.523171 master-0 kubenswrapper[4207]: I0224 02:02:38.522810 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3332acec-1553-4594-a903-a322399f6d9d-host-etc-kube\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.524923 master-0 kubenswrapper[4207]: I0224 02:02:38.524827 4207 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 24 02:02:38.534461 master-0 kubenswrapper[4207]: I0224 02:02:38.534392 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3332acec-1553-4594-a903-a322399f6d9d-metrics-tls\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.550358 master-0 kubenswrapper[4207]: I0224 02:02:38.550232 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6qs2\" (UniqueName: \"kubernetes.io/projected/3332acec-1553-4594-a903-a322399f6d9d-kube-api-access-x6qs2\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.594955 master-0 kubenswrapper[4207]: I0224 02:02:38.594829 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:02:38.692259 master-0 kubenswrapper[4207]: I0224 02:02:38.692157 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" event={"ID":"3332acec-1553-4594-a903-a322399f6d9d","Type":"ContainerStarted","Data":"46f23e74184a869450a53e076049b086fc11c3d08fab3acc813aa63061b356f3"}
Feb 24 02:02:38.963689 master-0 kubenswrapper[4207]: I0224 02:02:38.963601 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"]
Feb 24 02:02:38.964127 master-0 kubenswrapper[4207]: I0224 02:02:38.964087 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:38.966128 master-0 kubenswrapper[4207]: I0224 02:02:38.966083 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 24 02:02:38.966217 master-0 kubenswrapper[4207]: I0224 02:02:38.966168 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 24 02:02:38.966278 master-0 kubenswrapper[4207]: I0224 02:02:38.966184 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 02:02:39.222288 master-0 kubenswrapper[4207]: I0224 02:02:39.126974 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.222288 master-0 kubenswrapper[4207]: I0224 02:02:39.127013 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.222288 master-0 kubenswrapper[4207]: I0224 02:02:39.127040 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f72a322-2142-482a-9b0b-2ad890181d7a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.222288 master-0 kubenswrapper[4207]: I0224 02:02:39.127146 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f72a322-2142-482a-9b0b-2ad890181d7a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.222288 master-0 kubenswrapper[4207]: I0224 02:02:39.127244 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.228204 master-0 kubenswrapper[4207]: I0224 02:02:39.228151 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f72a322-2142-482a-9b0b-2ad890181d7a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.228291 master-0 kubenswrapper[4207]: I0224 02:02:39.228209 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f72a322-2142-482a-9b0b-2ad890181d7a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.228473 master-0 kubenswrapper[4207]: I0224 02:02:39.228425 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.228526 master-0 kubenswrapper[4207]: I0224 02:02:39.228508 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.228606 master-0 kubenswrapper[4207]: I0224 02:02:39.228546 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.228606 master-0 kubenswrapper[4207]: I0224 02:02:39.228563 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.228731 master-0 kubenswrapper[4207]: I0224 02:02:39.228679 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.228857 master-0 kubenswrapper[4207]: E0224 02:02:39.228817 4207 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 02:02:39.228978 master-0 kubenswrapper[4207]: E0224 02:02:39.228949 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:02:39.728919306 +0000 UTC m=+45.072223566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found
Feb 24 02:02:39.229419 master-0 kubenswrapper[4207]: I0224 02:02:39.229364 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f72a322-2142-482a-9b0b-2ad890181d7a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.245321 master-0 kubenswrapper[4207]: I0224 02:02:39.245271 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f72a322-2142-482a-9b0b-2ad890181d7a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.737761 master-0 kubenswrapper[4207]: I0224 02:02:39.737227 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:39.737761 master-0 kubenswrapper[4207]: E0224 02:02:39.737423 4207 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 02:02:39.738662 master-0 kubenswrapper[4207]: E0224 02:02:39.737837 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:02:40.737807841 +0000 UTC m=+46.081112111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found
Feb 24 02:02:40.751385 master-0 kubenswrapper[4207]: I0224 02:02:40.751252 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:40.752147 master-0 kubenswrapper[4207]: E0224 02:02:40.751494 4207 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 02:02:40.752147 master-0 kubenswrapper[4207]: E0224 02:02:40.751612 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:02:42.751562365 +0000 UTC m=+48.094866645 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found
Feb 24 02:02:41.215621 master-0 kubenswrapper[4207]: I0224 02:02:41.215591 4207 csr.go:261] certificate signing request csr-qmpxg is approved, waiting to be issued
Feb 24 02:02:41.223951 master-0 kubenswrapper[4207]: I0224 02:02:41.223917 4207 csr.go:257] certificate signing request csr-qmpxg is issued
Feb 24 02:02:42.225303 master-0 kubenswrapper[4207]: I0224 02:02:42.225167 4207 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-25 01:54:04 +0000 UTC, rotation deadline is 2026-02-24 19:20:44.649315704 +0000 UTC
Feb 24 02:02:42.225303 master-0 kubenswrapper[4207]: I0224 02:02:42.225225 4207 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h18m2.424096202s for next certificate rotation
Feb 24 02:02:42.704082 master-0 kubenswrapper[4207]: I0224 02:02:42.703993 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" event={"ID":"3332acec-1553-4594-a903-a322399f6d9d","Type":"ContainerStarted","Data":"bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241"}
Feb 24 02:02:42.767472 master-0 kubenswrapper[4207]: I0224 02:02:42.767385 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:02:42.767657 master-0 kubenswrapper[4207]: E0224 02:02:42.767611 4207 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 02:02:42.767779 master-0 kubenswrapper[4207]: E0224 02:02:42.767731 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:02:46.767697586 +0000 UTC m=+52.111001866 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found
Feb 24 02:02:43.225521 master-0 kubenswrapper[4207]: I0224 02:02:43.225434 4207 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-25 01:54:04 +0000 UTC, rotation deadline is 2026-02-24 22:43:49.457513398 +0000 UTC
Feb 24 02:02:43.225521 master-0 kubenswrapper[4207]: I0224 02:02:43.225484 4207 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h41m6.232035375s for next certificate rotation
Feb 24 02:02:43.520658 master-0 kubenswrapper[4207]: I0224 02:02:43.520459 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" podStartSLOduration=2.014198683 podStartE2EDuration="5.520430931s" podCreationTimestamp="2026-02-24 02:02:38 +0000 UTC" firstStartedPulling="2026-02-24 02:02:38.610272573 +0000 UTC m=+43.953576853" lastFinishedPulling="2026-02-24 02:02:42.116504851 +0000 UTC m=+47.459809101" observedRunningTime="2026-02-24 02:02:42.722935098 +0000 UTC m=+48.066239368" watchObservedRunningTime="2026-02-24 02:02:43.520430931 +0000 UTC m=+48.863735211"
Feb 24 02:02:43.521005 master-0 kubenswrapper[4207]: I0224 02:02:43.520822 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Feb 24 02:02:43.528916 master-0 kubenswrapper[4207]: I0224 02:02:43.528851 4207 scope.go:117] "RemoveContainer" containerID="4ea164dd4d44c905424ce0b0b3ea58702494938b88cbbbe52d4ce16914c7762b"
Feb 24 02:02:44.671254 master-0 kubenswrapper[4207]: I0224 02:02:44.671163 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-f2lj9"]
Feb 24 02:02:44.672200 master-0 kubenswrapper[4207]: I0224 02:02:44.671727 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.674237 master-0 kubenswrapper[4207]: I0224 02:02:44.674183 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Feb 24 02:02:44.674792 master-0 kubenswrapper[4207]: I0224 02:02:44.674714 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Feb 24 02:02:44.675026 master-0 kubenswrapper[4207]: I0224 02:02:44.674982 4207 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Feb 24 02:02:44.675618 master-0 kubenswrapper[4207]: I0224 02:02:44.675553 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Feb 24 02:02:44.711781 master-0 kubenswrapper[4207]: I0224 02:02:44.711715 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 24 02:02:44.712601 master-0 kubenswrapper[4207]: I0224 02:02:44.712489 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"73250fbf83eb734a494f12593474f38faaba12f425754ad28c833c6cc94b24a7"}
Feb 24 02:02:44.727921 master-0 kubenswrapper[4207]: I0224 02:02:44.727798 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.727750077 podStartE2EDuration="1.727750077s" podCreationTimestamp="2026-02-24 02:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:02:44.727633044 +0000 UTC m=+50.070937314" watchObservedRunningTime="2026-02-24 02:02:44.727750077 +0000 UTC m=+50.071054357"
Feb 24 02:02:44.783249 master-0 kubenswrapper[4207]: I0224 02:02:44.783172 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-var-run-resolv-conf\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.783249 master-0 kubenswrapper[4207]: I0224 02:02:44.783235 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfjsk\" (UniqueName: \"kubernetes.io/projected/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-kube-api-access-gfjsk\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.783421 master-0 kubenswrapper[4207]: I0224 02:02:44.783281 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-ca-bundle\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.783421 master-0 kubenswrapper[4207]: I0224 02:02:44.783318 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-resolv-conf\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.783543 master-0 kubenswrapper[4207]: I0224 02:02:44.783470 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-sno-bootstrap-files\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.884201 master-0 kubenswrapper[4207]: I0224 02:02:44.884091 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfjsk\" (UniqueName: \"kubernetes.io/projected/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-kube-api-access-gfjsk\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.884201 master-0 kubenswrapper[4207]: I0224 02:02:44.884157 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-var-run-resolv-conf\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.884201 master-0 kubenswrapper[4207]: I0224 02:02:44.884198 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-ca-bundle\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.884456 master-0 kubenswrapper[4207]: I0224 02:02:44.884288 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-var-run-resolv-conf\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.884523 master-0 kubenswrapper[4207]: I0224 02:02:44.884454 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-resolv-conf\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.884523 master-0 kubenswrapper[4207]: I0224 02:02:44.884499 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-resolv-conf\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.884688 master-0 kubenswrapper[4207]: I0224 02:02:44.884461 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-ca-bundle\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.884688 master-0 kubenswrapper[4207]: I0224 02:02:44.884535 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-sno-bootstrap-files\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.884688 master-0 kubenswrapper[4207]: I0224 02:02:44.884633 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-sno-bootstrap-files\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:44.914134 master-0 kubenswrapper[4207]: I0224 02:02:44.914029 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfjsk\" (UniqueName: \"kubernetes.io/projected/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-kube-api-access-gfjsk\") pod \"assisted-installer-controller-f2lj9\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:45.007451 master-0 kubenswrapper[4207]: I0224 02:02:45.007343 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-f2lj9"
Feb 24 02:02:45.023525 master-0 kubenswrapper[4207]: W0224 02:02:45.023464 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa1462b_8f1c_4a77_9c1c_f0f79910737f.slice/crio-acf2d712c422a3cffe6f1ecfbb7b0b5262ef2f0636bb7727cd7fd72186b9cf69 WatchSource:0}: Error finding container acf2d712c422a3cffe6f1ecfbb7b0b5262ef2f0636bb7727cd7fd72186b9cf69: Status 404 returned error can't find the container with id acf2d712c422a3cffe6f1ecfbb7b0b5262ef2f0636bb7727cd7fd72186b9cf69
Feb 24 02:02:45.088502 master-0 kubenswrapper[4207]: I0224 02:02:45.088010 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-dm7d7"]
Feb 24 02:02:45.088669 master-0 kubenswrapper[4207]: I0224 02:02:45.088502 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-dm7d7"
Feb 24 02:02:45.186639 master-0 kubenswrapper[4207]: I0224 02:02:45.186522 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ms8\" (UniqueName: \"kubernetes.io/projected/e8e1e397-edd4-4278-b3ea-25fe829de509-kube-api-access-n6ms8\") pod \"mtu-prober-dm7d7\" (UID: \"e8e1e397-edd4-4278-b3ea-25fe829de509\") " pod="openshift-network-operator/mtu-prober-dm7d7"
Feb 24 02:02:45.287561 master-0 kubenswrapper[4207]: I0224 02:02:45.287369 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6ms8\" (UniqueName: \"kubernetes.io/projected/e8e1e397-edd4-4278-b3ea-25fe829de509-kube-api-access-n6ms8\") pod \"mtu-prober-dm7d7\" (UID: \"e8e1e397-edd4-4278-b3ea-25fe829de509\") " pod="openshift-network-operator/mtu-prober-dm7d7"
Feb 24 02:02:45.314994 master-0 kubenswrapper[4207]: I0224 02:02:45.314903 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6ms8\" (UniqueName: \"kubernetes.io/projected/e8e1e397-edd4-4278-b3ea-25fe829de509-kube-api-access-n6ms8\") pod \"mtu-prober-dm7d7\" (UID: \"e8e1e397-edd4-4278-b3ea-25fe829de509\") " pod="openshift-network-operator/mtu-prober-dm7d7"
Feb 24 02:02:45.407800 master-0 kubenswrapper[4207]: I0224 02:02:45.407655 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-dm7d7"
Feb 24 02:02:45.423825 master-0 kubenswrapper[4207]: W0224 02:02:45.423724 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8e1e397_edd4_4278_b3ea_25fe829de509.slice/crio-88f77a5110cbd6ffb82334a858bc74333d14fee5bea94889fdaf87723880303b WatchSource:0}: Error finding container 88f77a5110cbd6ffb82334a858bc74333d14fee5bea94889fdaf87723880303b: Status 404 returned error can't find the container with id 88f77a5110cbd6ffb82334a858bc74333d14fee5bea94889fdaf87723880303b
Feb 24 02:02:45.718632 master-0 kubenswrapper[4207]: I0224 02:02:45.718117 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-dm7d7" event={"ID":"e8e1e397-edd4-4278-b3ea-25fe829de509","Type":"ContainerStarted","Data":"9084ba926bf7975865b803686ed689ae33dbbe263dc377c963e7af79a6dfafbb"}
Feb 24 02:02:45.718632 master-0 kubenswrapper[4207]: I0224 02:02:45.718201 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-dm7d7" event={"ID":"e8e1e397-edd4-4278-b3ea-25fe829de509","Type":"ContainerStarted","Data":"88f77a5110cbd6ffb82334a858bc74333d14fee5bea94889fdaf87723880303b"}
Feb 24 02:02:45.720351 master-0 kubenswrapper[4207]: I0224 02:02:45.720301 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-f2lj9"
event={"ID":"7fa1462b-8f1c-4a77-9c1c-f0f79910737f","Type":"ContainerStarted","Data":"acf2d712c422a3cffe6f1ecfbb7b0b5262ef2f0636bb7727cd7fd72186b9cf69"} Feb 24 02:02:45.733742 master-0 kubenswrapper[4207]: I0224 02:02:45.733664 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/mtu-prober-dm7d7" podStartSLOduration=0.733644079 podStartE2EDuration="733.644079ms" podCreationTimestamp="2026-02-24 02:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:02:45.732424567 +0000 UTC m=+51.075728837" watchObservedRunningTime="2026-02-24 02:02:45.733644079 +0000 UTC m=+51.076948359" Feb 24 02:02:46.726531 master-0 kubenswrapper[4207]: I0224 02:02:46.726437 4207 generic.go:334] "Generic (PLEG): container finished" podID="e8e1e397-edd4-4278-b3ea-25fe829de509" containerID="9084ba926bf7975865b803686ed689ae33dbbe263dc377c963e7af79a6dfafbb" exitCode=0 Feb 24 02:02:46.727430 master-0 kubenswrapper[4207]: I0224 02:02:46.726505 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-dm7d7" event={"ID":"e8e1e397-edd4-4278-b3ea-25fe829de509","Type":"ContainerDied","Data":"9084ba926bf7975865b803686ed689ae33dbbe263dc377c963e7af79a6dfafbb"} Feb 24 02:02:46.798803 master-0 kubenswrapper[4207]: I0224 02:02:46.798755 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:02:46.799118 master-0 kubenswrapper[4207]: E0224 02:02:46.799053 4207 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not 
found Feb 24 02:02:46.799224 master-0 kubenswrapper[4207]: E0224 02:02:46.799162 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:02:54.799134877 +0000 UTC m=+60.142439157 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found Feb 24 02:02:47.747451 master-0 kubenswrapper[4207]: I0224 02:02:47.747389 4207 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-dm7d7" Feb 24 02:02:47.906802 master-0 kubenswrapper[4207]: I0224 02:02:47.906701 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6ms8\" (UniqueName: \"kubernetes.io/projected/e8e1e397-edd4-4278-b3ea-25fe829de509-kube-api-access-n6ms8\") pod \"e8e1e397-edd4-4278-b3ea-25fe829de509\" (UID: \"e8e1e397-edd4-4278-b3ea-25fe829de509\") " Feb 24 02:02:47.911209 master-0 kubenswrapper[4207]: I0224 02:02:47.911151 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e1e397-edd4-4278-b3ea-25fe829de509-kube-api-access-n6ms8" (OuterVolumeSpecName: "kube-api-access-n6ms8") pod "e8e1e397-edd4-4278-b3ea-25fe829de509" (UID: "e8e1e397-edd4-4278-b3ea-25fe829de509"). InnerVolumeSpecName "kube-api-access-n6ms8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:02:48.007423 master-0 kubenswrapper[4207]: I0224 02:02:48.007281 4207 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6ms8\" (UniqueName: \"kubernetes.io/projected/e8e1e397-edd4-4278-b3ea-25fe829de509-kube-api-access-n6ms8\") on node \"master-0\" DevicePath \"\"" Feb 24 02:02:48.734193 master-0 kubenswrapper[4207]: I0224 02:02:48.734110 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-dm7d7" event={"ID":"e8e1e397-edd4-4278-b3ea-25fe829de509","Type":"ContainerDied","Data":"88f77a5110cbd6ffb82334a858bc74333d14fee5bea94889fdaf87723880303b"} Feb 24 02:02:48.734512 master-0 kubenswrapper[4207]: I0224 02:02:48.734208 4207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f77a5110cbd6ffb82334a858bc74333d14fee5bea94889fdaf87723880303b" Feb 24 02:02:48.734512 master-0 kubenswrapper[4207]: I0224 02:02:48.734337 4207 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-dm7d7" Feb 24 02:02:50.098017 master-0 kubenswrapper[4207]: I0224 02:02:50.097961 4207 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-dm7d7"] Feb 24 02:02:50.101696 master-0 kubenswrapper[4207]: I0224 02:02:50.101646 4207 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-dm7d7"] Feb 24 02:02:50.741161 master-0 kubenswrapper[4207]: I0224 02:02:50.741050 4207 generic.go:334] "Generic (PLEG): container finished" podID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerID="35c312973828464e3d9786034ffddad219bbd2d62792822db99238b48a9c981d" exitCode=0 Feb 24 02:02:50.741161 master-0 kubenswrapper[4207]: I0224 02:02:50.741117 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-f2lj9" event={"ID":"7fa1462b-8f1c-4a77-9c1c-f0f79910737f","Type":"ContainerDied","Data":"35c312973828464e3d9786034ffddad219bbd2d62792822db99238b48a9c981d"} Feb 24 02:02:51.510274 master-0 kubenswrapper[4207]: I0224 02:02:51.509862 4207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e1e397-edd4-4278-b3ea-25fe829de509" path="/var/lib/kubelet/pods/e8e1e397-edd4-4278-b3ea-25fe829de509/volumes" Feb 24 02:02:51.769236 master-0 kubenswrapper[4207]: I0224 02:02:51.769182 4207 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-f2lj9" Feb 24 02:02:51.936789 master-0 kubenswrapper[4207]: I0224 02:02:51.936715 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfjsk\" (UniqueName: \"kubernetes.io/projected/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-kube-api-access-gfjsk\") pod \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " Feb 24 02:02:51.936789 master-0 kubenswrapper[4207]: I0224 02:02:51.936777 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-ca-bundle\") pod \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " Feb 24 02:02:51.937000 master-0 kubenswrapper[4207]: I0224 02:02:51.936814 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-var-run-resolv-conf\") pod \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " Feb 24 02:02:51.937000 master-0 kubenswrapper[4207]: I0224 02:02:51.936848 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-sno-bootstrap-files\") pod \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " Feb 24 02:02:51.937000 master-0 kubenswrapper[4207]: I0224 02:02:51.936877 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-resolv-conf\") pod \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\" (UID: \"7fa1462b-8f1c-4a77-9c1c-f0f79910737f\") " Feb 24 02:02:51.937000 master-0 
kubenswrapper[4207]: I0224 02:02:51.936977 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "7fa1462b-8f1c-4a77-9c1c-f0f79910737f" (UID: "7fa1462b-8f1c-4a77-9c1c-f0f79910737f"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:02:51.937227 master-0 kubenswrapper[4207]: I0224 02:02:51.936976 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "7fa1462b-8f1c-4a77-9c1c-f0f79910737f" (UID: "7fa1462b-8f1c-4a77-9c1c-f0f79910737f"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:02:51.937227 master-0 kubenswrapper[4207]: I0224 02:02:51.937015 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "7fa1462b-8f1c-4a77-9c1c-f0f79910737f" (UID: "7fa1462b-8f1c-4a77-9c1c-f0f79910737f"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:02:51.937227 master-0 kubenswrapper[4207]: I0224 02:02:51.937064 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "7fa1462b-8f1c-4a77-9c1c-f0f79910737f" (UID: "7fa1462b-8f1c-4a77-9c1c-f0f79910737f"). InnerVolumeSpecName "sno-bootstrap-files". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:02:51.941719 master-0 kubenswrapper[4207]: I0224 02:02:51.941625 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-kube-api-access-gfjsk" (OuterVolumeSpecName: "kube-api-access-gfjsk") pod "7fa1462b-8f1c-4a77-9c1c-f0f79910737f" (UID: "7fa1462b-8f1c-4a77-9c1c-f0f79910737f"). InnerVolumeSpecName "kube-api-access-gfjsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:02:52.037658 master-0 kubenswrapper[4207]: I0224 02:02:52.037539 4207 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Feb 24 02:02:52.037658 master-0 kubenswrapper[4207]: I0224 02:02:52.037623 4207 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 24 02:02:52.037658 master-0 kubenswrapper[4207]: I0224 02:02:52.037644 4207 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 24 02:02:52.037658 master-0 kubenswrapper[4207]: I0224 02:02:52.037662 4207 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfjsk\" (UniqueName: \"kubernetes.io/projected/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-kube-api-access-gfjsk\") on node \"master-0\" DevicePath \"\"" Feb 24 02:02:52.037932 master-0 kubenswrapper[4207]: I0224 02:02:52.037681 4207 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/7fa1462b-8f1c-4a77-9c1c-f0f79910737f-host-ca-bundle\") on node \"master-0\" DevicePath 
\"\"" Feb 24 02:02:52.748742 master-0 kubenswrapper[4207]: I0224 02:02:52.748655 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-f2lj9" event={"ID":"7fa1462b-8f1c-4a77-9c1c-f0f79910737f","Type":"ContainerDied","Data":"acf2d712c422a3cffe6f1ecfbb7b0b5262ef2f0636bb7727cd7fd72186b9cf69"} Feb 24 02:02:52.749766 master-0 kubenswrapper[4207]: I0224 02:02:52.748760 4207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf2d712c422a3cffe6f1ecfbb7b0b5262ef2f0636bb7727cd7fd72186b9cf69" Feb 24 02:02:52.749766 master-0 kubenswrapper[4207]: I0224 02:02:52.748683 4207 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-f2lj9" Feb 24 02:02:54.860727 master-0 kubenswrapper[4207]: I0224 02:02:54.860608 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:02:54.861458 master-0 kubenswrapper[4207]: E0224 02:02:54.860810 4207 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 02:02:54.861458 master-0 kubenswrapper[4207]: E0224 02:02:54.860961 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:10.860896822 +0000 UTC m=+76.204201102 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found Feb 24 02:02:54.999676 master-0 kubenswrapper[4207]: I0224 02:02:54.999561 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-7fbjw"] Feb 24 02:02:54.999928 master-0 kubenswrapper[4207]: E0224 02:02:54.999774 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerName="assisted-installer-controller" Feb 24 02:02:54.999928 master-0 kubenswrapper[4207]: I0224 02:02:54.999806 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerName="assisted-installer-controller" Feb 24 02:02:54.999928 master-0 kubenswrapper[4207]: E0224 02:02:54.999826 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e1e397-edd4-4278-b3ea-25fe829de509" containerName="prober" Feb 24 02:02:54.999928 master-0 kubenswrapper[4207]: I0224 02:02:54.999843 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e1e397-edd4-4278-b3ea-25fe829de509" containerName="prober" Feb 24 02:02:54.999928 master-0 kubenswrapper[4207]: I0224 02:02:54.999886 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e1e397-edd4-4278-b3ea-25fe829de509" containerName="prober" Feb 24 02:02:54.999928 master-0 kubenswrapper[4207]: I0224 02:02:54.999900 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerName="assisted-installer-controller" Feb 24 02:02:55.000262 master-0 kubenswrapper[4207]: I0224 02:02:55.000193 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.005998 master-0 kubenswrapper[4207]: I0224 02:02:55.005944 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 02:02:55.006996 master-0 kubenswrapper[4207]: I0224 02:02:55.006948 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 02:02:55.007240 master-0 kubenswrapper[4207]: I0224 02:02:55.007203 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 24 02:02:55.007663 master-0 kubenswrapper[4207]: I0224 02:02:55.007617 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 02:02:55.162944 master-0 kubenswrapper[4207]: I0224 02:02:55.162744 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-conf-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.162944 master-0 kubenswrapper[4207]: I0224 02:02:55.162822 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-socket-dir-parent\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163263 master-0 kubenswrapper[4207]: I0224 02:02:55.162947 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-system-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " 
pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163263 master-0 kubenswrapper[4207]: I0224 02:02:55.163011 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-multus-certs\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163263 master-0 kubenswrapper[4207]: I0224 02:02:55.163061 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-bin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163263 master-0 kubenswrapper[4207]: I0224 02:02:55.163095 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-hostroot\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163491 master-0 kubenswrapper[4207]: I0224 02:02:55.163252 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-netns\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163491 master-0 kubenswrapper[4207]: I0224 02:02:55.163320 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cnibin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " 
pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163491 master-0 kubenswrapper[4207]: I0224 02:02:55.163352 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-os-release\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163491 master-0 kubenswrapper[4207]: I0224 02:02:55.163382 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cni-binary-copy\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163491 master-0 kubenswrapper[4207]: I0224 02:02:55.163416 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-multus\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163804 master-0 kubenswrapper[4207]: I0224 02:02:55.163514 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-daemon-config\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163804 master-0 kubenswrapper[4207]: I0224 02:02:55.163621 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-etc-kubernetes\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " 
pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163804 master-0 kubenswrapper[4207]: I0224 02:02:55.163684 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.163804 master-0 kubenswrapper[4207]: I0224 02:02:55.163730 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnd2\" (UniqueName: \"kubernetes.io/projected/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-kube-api-access-bbnd2\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.164033 master-0 kubenswrapper[4207]: I0224 02:02:55.163805 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-k8s-cni-cncf-io\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.164033 master-0 kubenswrapper[4207]: I0224 02:02:55.163894 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-kubelet\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.193444 master-0 kubenswrapper[4207]: I0224 02:02:55.193354 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jtdht"] Feb 24 02:02:55.194354 master-0 kubenswrapper[4207]: I0224 02:02:55.194302 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.197226 master-0 kubenswrapper[4207]: I0224 02:02:55.197163 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 02:02:55.197799 master-0 kubenswrapper[4207]: I0224 02:02:55.197726 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 24 02:02:55.266895 master-0 kubenswrapper[4207]: I0224 02:02:55.266835 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-conf-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.266895 master-0 kubenswrapper[4207]: I0224 02:02:55.266895 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-socket-dir-parent\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267274 master-0 kubenswrapper[4207]: I0224 02:02:55.266931 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-system-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267274 master-0 kubenswrapper[4207]: I0224 02:02:55.267032 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-conf-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 
02:02:55.267274 master-0 kubenswrapper[4207]: I0224 02:02:55.267156 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-multus-certs\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267274 master-0 kubenswrapper[4207]: I0224 02:02:55.267170 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-socket-dir-parent\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267274 master-0 kubenswrapper[4207]: I0224 02:02:55.267218 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-netns\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267274 master-0 kubenswrapper[4207]: I0224 02:02:55.267258 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-bin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267290 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-system-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267297 4207 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-hostroot\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267333 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-netns\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267341 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cnibin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267371 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-multus-certs\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267378 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-os-release\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267410 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-hostroot\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267420 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267448 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-bin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267458 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cni-binary-copy\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267490 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-multus\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267508 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-os-release\") pod \"multus-7fbjw\" (UID: 
\"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267515 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-daemon-config\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267542 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-etc-kubernetes\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267559 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cnibin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.267612 master-0 kubenswrapper[4207]: I0224 02:02:55.267614 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-k8s-cni-cncf-io\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.268151 master-0 kubenswrapper[4207]: I0224 02:02:55.267640 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-kubelet\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 
02:02:55.268151 master-0 kubenswrapper[4207]: I0224 02:02:55.267665 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnd2\" (UniqueName: \"kubernetes.io/projected/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-kube-api-access-bbnd2\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.268151 master-0 kubenswrapper[4207]: I0224 02:02:55.267764 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.268151 master-0 kubenswrapper[4207]: I0224 02:02:55.267801 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-etc-kubernetes\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.268151 master-0 kubenswrapper[4207]: I0224 02:02:55.267834 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-multus\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.268352 master-0 kubenswrapper[4207]: I0224 02:02:55.268326 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-k8s-cni-cncf-io\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.268643 master-0 kubenswrapper[4207]: I0224 02:02:55.268615 4207 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-daemon-config\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.268702 master-0 kubenswrapper[4207]: I0224 02:02:55.268671 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-kubelet\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.269047 master-0 kubenswrapper[4207]: I0224 02:02:55.269019 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cni-binary-copy\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.289837 master-0 kubenswrapper[4207]: I0224 02:02:55.289770 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbnd2\" (UniqueName: \"kubernetes.io/projected/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-kube-api-access-bbnd2\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.323942 master-0 kubenswrapper[4207]: I0224 02:02:55.323855 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-7fbjw" Feb 24 02:02:55.341844 master-0 kubenswrapper[4207]: W0224 02:02:55.341803 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5da829af_05fb_4f6e_9bec_c4dcc9cbec4b.slice/crio-ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40 WatchSource:0}: Error finding container ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40: Status 404 returned error can't find the container with id ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40 Feb 24 02:02:55.367943 master-0 kubenswrapper[4207]: I0224 02:02:55.367877 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-cnibin\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.368083 master-0 kubenswrapper[4207]: I0224 02:02:55.367948 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.368083 master-0 kubenswrapper[4207]: I0224 02:02:55.367997 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrmsh\" (UniqueName: \"kubernetes.io/projected/57811d07-ae8a-44b7-8efb-dafc5afad31e-kube-api-access-vrmsh\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.368083 master-0 kubenswrapper[4207]: 
I0224 02:02:55.368035 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-os-release\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.368395 master-0 kubenswrapper[4207]: I0224 02:02:55.368087 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.368395 master-0 kubenswrapper[4207]: I0224 02:02:55.368124 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-system-cni-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.368395 master-0 kubenswrapper[4207]: I0224 02:02:55.368232 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.368395 master-0 kubenswrapper[4207]: I0224 02:02:55.368333 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.468687 master-0 kubenswrapper[4207]: I0224 02:02:55.468623 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.468821 master-0 kubenswrapper[4207]: I0224 02:02:55.468760 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-system-cni-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.468821 master-0 kubenswrapper[4207]: I0224 02:02:55.468811 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.468945 master-0 kubenswrapper[4207]: I0224 02:02:55.468845 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.468945 master-0 kubenswrapper[4207]: 
I0224 02:02:55.468887 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-cnibin\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.468945 master-0 kubenswrapper[4207]: I0224 02:02:55.468903 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.468945 master-0 kubenswrapper[4207]: I0224 02:02:55.468932 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-system-cni-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.469165 master-0 kubenswrapper[4207]: I0224 02:02:55.468920 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.469165 master-0 kubenswrapper[4207]: I0224 02:02:55.469025 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrmsh\" (UniqueName: \"kubernetes.io/projected/57811d07-ae8a-44b7-8efb-dafc5afad31e-kube-api-access-vrmsh\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " 
pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.469340 master-0 kubenswrapper[4207]: I0224 02:02:55.469268 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-cnibin\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.469496 master-0 kubenswrapper[4207]: I0224 02:02:55.469450 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-os-release\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.469638 master-0 kubenswrapper[4207]: I0224 02:02:55.469563 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-os-release\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.470439 master-0 kubenswrapper[4207]: I0224 02:02:55.470373 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.470516 master-0 kubenswrapper[4207]: I0224 02:02:55.470456 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 02:02:55.471733 master-0 kubenswrapper[4207]: I0224 02:02:55.471651 4207 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 24 02:02:55.480712 master-0 kubenswrapper[4207]: I0224 02:02:55.480655 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.481085 master-0 kubenswrapper[4207]: I0224 02:02:55.481050 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.497796 master-0 kubenswrapper[4207]: I0224 02:02:55.497727 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrmsh\" (UniqueName: \"kubernetes.io/projected/57811d07-ae8a-44b7-8efb-dafc5afad31e-kube-api-access-vrmsh\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.537072 master-0 kubenswrapper[4207]: I0224 02:02:55.536735 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:02:55.557890 master-0 kubenswrapper[4207]: W0224 02:02:55.557831 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57811d07_ae8a_44b7_8efb_dafc5afad31e.slice/crio-db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8 WatchSource:0}: Error finding container db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8: Status 404 returned error can't find the container with id db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8 Feb 24 02:02:55.758471 master-0 kubenswrapper[4207]: I0224 02:02:55.758335 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerStarted","Data":"db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8"} Feb 24 02:02:55.760564 master-0 kubenswrapper[4207]: I0224 02:02:55.760520 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fbjw" event={"ID":"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b","Type":"ContainerStarted","Data":"ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40"} Feb 24 02:02:55.978909 master-0 kubenswrapper[4207]: I0224 02:02:55.978814 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-tntcf"] Feb 24 02:02:55.980031 master-0 kubenswrapper[4207]: I0224 02:02:55.979313 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:55.980031 master-0 kubenswrapper[4207]: E0224 02:02:55.979403 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:02:56.175232 master-0 kubenswrapper[4207]: I0224 02:02:56.175092 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:56.175401 master-0 kubenswrapper[4207]: I0224 02:02:56.175356 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmd6\" (UniqueName: \"kubernetes.io/projected/70e2ba24-4871-4d1d-9935-156fdbeb2810-kube-api-access-4nmd6\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:56.275721 master-0 kubenswrapper[4207]: I0224 02:02:56.275669 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:56.275828 master-0 kubenswrapper[4207]: I0224 02:02:56.275800 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nmd6\" 
(UniqueName: \"kubernetes.io/projected/70e2ba24-4871-4d1d-9935-156fdbeb2810-kube-api-access-4nmd6\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:56.275948 master-0 kubenswrapper[4207]: E0224 02:02:56.275879 4207 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:02:56.276032 master-0 kubenswrapper[4207]: E0224 02:02:56.276007 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:02:56.775976961 +0000 UTC m=+62.119281241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:02:56.308305 master-0 kubenswrapper[4207]: I0224 02:02:56.308252 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nmd6\" (UniqueName: \"kubernetes.io/projected/70e2ba24-4871-4d1d-9935-156fdbeb2810-kube-api-access-4nmd6\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:56.780904 master-0 kubenswrapper[4207]: I0224 02:02:56.780855 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:56.781093 master-0 
kubenswrapper[4207]: E0224 02:02:56.781069 4207 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:02:56.781172 master-0 kubenswrapper[4207]: E0224 02:02:56.781154 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:02:57.781133967 +0000 UTC m=+63.124438207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:02:57.504148 master-0 kubenswrapper[4207]: I0224 02:02:57.504037 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:57.504926 master-0 kubenswrapper[4207]: E0224 02:02:57.504353 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:02:57.787437 master-0 kubenswrapper[4207]: I0224 02:02:57.787293 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:57.787437 master-0 kubenswrapper[4207]: E0224 02:02:57.787419 4207 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:02:57.787693 master-0 kubenswrapper[4207]: E0224 02:02:57.787487 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:02:59.787466781 +0000 UTC m=+65.130771031 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:02:58.773213 master-0 kubenswrapper[4207]: I0224 02:02:58.773151 4207 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="78fb207cbc767c0fee7b7d210f99c9aaf3165a7c791dd4e586c95fb618507ed8" exitCode=0 Feb 24 02:02:58.773213 master-0 kubenswrapper[4207]: I0224 02:02:58.773213 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"78fb207cbc767c0fee7b7d210f99c9aaf3165a7c791dd4e586c95fb618507ed8"} Feb 24 02:02:59.503644 master-0 kubenswrapper[4207]: I0224 02:02:59.503502 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:59.504200 master-0 kubenswrapper[4207]: E0224 02:02:59.503733 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:02:59.805722 master-0 kubenswrapper[4207]: I0224 02:02:59.805590 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:02:59.806346 master-0 kubenswrapper[4207]: E0224 02:02:59.805792 4207 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:02:59.806346 master-0 kubenswrapper[4207]: E0224 02:02:59.805913 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:03.805882505 +0000 UTC m=+69.149186945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:03:01.503792 master-0 kubenswrapper[4207]: I0224 02:03:01.503694 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:01.504737 master-0 kubenswrapper[4207]: E0224 02:03:01.503938 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:03.506070 master-0 kubenswrapper[4207]: I0224 02:03:03.505985 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:03.506793 master-0 kubenswrapper[4207]: E0224 02:03:03.506282 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:03.835728 master-0 kubenswrapper[4207]: I0224 02:03:03.835584 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:03.835728 master-0 kubenswrapper[4207]: E0224 02:03:03.835722 4207 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:03:03.835913 master-0 kubenswrapper[4207]: E0224 02:03:03.835780 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:11.835764415 +0000 UTC m=+77.179068655 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:03:05.146217 master-0 kubenswrapper[4207]: I0224 02:03:05.146160 4207 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 02:03:05.503660 master-0 kubenswrapper[4207]: I0224 02:03:05.503612 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:05.504304 master-0 kubenswrapper[4207]: E0224 02:03:05.504241 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:07.376701 master-0 kubenswrapper[4207]: I0224 02:03:07.376183 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k"] Feb 24 02:03:07.376701 master-0 kubenswrapper[4207]: I0224 02:03:07.376567 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.379595 master-0 kubenswrapper[4207]: I0224 02:03:07.379530 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 02:03:07.380028 master-0 kubenswrapper[4207]: I0224 02:03:07.379702 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 02:03:07.380028 master-0 kubenswrapper[4207]: I0224 02:03:07.379917 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 02:03:07.380028 master-0 kubenswrapper[4207]: I0224 02:03:07.379990 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 02:03:07.380117 master-0 kubenswrapper[4207]: I0224 02:03:07.380075 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 02:03:07.504133 master-0 kubenswrapper[4207]: I0224 02:03:07.504056 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:07.504323 master-0 kubenswrapper[4207]: E0224 02:03:07.504274 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:07.566495 master-0 kubenswrapper[4207]: I0224 02:03:07.566429 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/523033b8-4101-4a55-8320-55bef04ddaaf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.566883 master-0 kubenswrapper[4207]: I0224 02:03:07.566842 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.566939 master-0 kubenswrapper[4207]: I0224 02:03:07.566909 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlg2j\" (UniqueName: \"kubernetes.io/projected/523033b8-4101-4a55-8320-55bef04ddaaf-kube-api-access-dlg2j\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.567023 master-0 kubenswrapper[4207]: I0224 02:03:07.566981 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.592643 master-0 
kubenswrapper[4207]: I0224 02:03:07.592592 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5kbp"] Feb 24 02:03:07.593736 master-0 kubenswrapper[4207]: I0224 02:03:07.593703 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.596303 master-0 kubenswrapper[4207]: I0224 02:03:07.596253 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 02:03:07.598348 master-0 kubenswrapper[4207]: I0224 02:03:07.598304 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 02:03:07.667768 master-0 kubenswrapper[4207]: I0224 02:03:07.667541 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-env-overrides\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.667768 master-0 kubenswrapper[4207]: I0224 02:03:07.667660 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-slash\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.667768 master-0 kubenswrapper[4207]: I0224 02:03:07.667700 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-node-log\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.667768 master-0 
kubenswrapper[4207]: I0224 02:03:07.667737 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-log-socket\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668190 master-0 kubenswrapper[4207]: I0224 02:03:07.667803 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.668190 master-0 kubenswrapper[4207]: I0224 02:03:07.667911 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-netns\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668190 master-0 kubenswrapper[4207]: I0224 02:03:07.668002 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-bin\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668312 master-0 kubenswrapper[4207]: I0224 02:03:07.668199 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-systemd-units\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668312 master-0 kubenswrapper[4207]: I0224 02:03:07.668241 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpdd5\" (UniqueName: \"kubernetes.io/projected/70c4541e-cb82-4d13-95b4-905dda52bd9a-kube-api-access-kpdd5\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668312 master-0 kubenswrapper[4207]: I0224 02:03:07.668284 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668435 master-0 kubenswrapper[4207]: I0224 02:03:07.668326 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovn-node-metrics-cert\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668435 master-0 kubenswrapper[4207]: I0224 02:03:07.668368 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/523033b8-4101-4a55-8320-55bef04ddaaf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.668435 master-0 kubenswrapper[4207]: I0224 02:03:07.668410 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-systemd\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668551 master-0 kubenswrapper[4207]: I0224 02:03:07.668489 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-var-lib-openvswitch\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668715 master-0 kubenswrapper[4207]: I0224 02:03:07.668654 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-ovn\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668791 master-0 kubenswrapper[4207]: I0224 02:03:07.668727 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-script-lib\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668791 master-0 kubenswrapper[4207]: I0224 02:03:07.668774 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-kubelet\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668876 master-0 kubenswrapper[4207]: I0224 02:03:07.668810 4207 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.668876 master-0 kubenswrapper[4207]: I0224 02:03:07.668847 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-netd\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.668959 master-0 kubenswrapper[4207]: I0224 02:03:07.668891 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlg2j\" (UniqueName: \"kubernetes.io/projected/523033b8-4101-4a55-8320-55bef04ddaaf-kube-api-access-dlg2j\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.668959 master-0 kubenswrapper[4207]: I0224 02:03:07.668919 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.669038 master-0 kubenswrapper[4207]: I0224 02:03:07.668937 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-etc-openvswitch\") pod \"ovnkube-node-m5kbp\" (UID: 
\"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.669082 master-0 kubenswrapper[4207]: I0224 02:03:07.669031 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.669082 master-0 kubenswrapper[4207]: I0224 02:03:07.669074 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-config\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.669220 master-0 kubenswrapper[4207]: I0224 02:03:07.669176 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-openvswitch\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.671060 master-0 kubenswrapper[4207]: I0224 02:03:07.669771 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.672743 master-0 kubenswrapper[4207]: I0224 02:03:07.672697 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/523033b8-4101-4a55-8320-55bef04ddaaf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.696430 master-0 kubenswrapper[4207]: I0224 02:03:07.696362 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlg2j\" (UniqueName: \"kubernetes.io/projected/523033b8-4101-4a55-8320-55bef04ddaaf-kube-api-access-dlg2j\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.710017 master-0 kubenswrapper[4207]: I0224 02:03:07.709960 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:07.770314 master-0 kubenswrapper[4207]: I0224 02:03:07.770253 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-slash\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770445 master-0 kubenswrapper[4207]: I0224 02:03:07.770323 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-netns\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770445 master-0 kubenswrapper[4207]: I0224 02:03:07.770343 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-slash\") pod \"ovnkube-node-m5kbp\" (UID: 
\"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770445 master-0 kubenswrapper[4207]: I0224 02:03:07.770360 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-node-log\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770445 master-0 kubenswrapper[4207]: I0224 02:03:07.770416 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-log-socket\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770727 master-0 kubenswrapper[4207]: I0224 02:03:07.770469 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-bin\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770727 master-0 kubenswrapper[4207]: I0224 02:03:07.770479 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-netns\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770727 master-0 kubenswrapper[4207]: I0224 02:03:07.770511 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-log-socket\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770727 master-0 kubenswrapper[4207]: I0224 02:03:07.770422 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-node-log\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770727 master-0 kubenswrapper[4207]: I0224 02:03:07.770552 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-bin\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770727 master-0 kubenswrapper[4207]: I0224 02:03:07.770678 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-systemd-units\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.770727 master-0 kubenswrapper[4207]: I0224 02:03:07.770719 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpdd5\" (UniqueName: \"kubernetes.io/projected/70c4541e-cb82-4d13-95b4-905dda52bd9a-kube-api-access-kpdd5\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.770764 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5kbp\" (UID: 
\"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.770821 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovn-node-metrics-cert\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.770860 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-systemd\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.770892 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-var-lib-openvswitch\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.770928 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-ovn\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.770963 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-script-lib\") pod \"ovnkube-node-m5kbp\" (UID: 
\"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.770998 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-kubelet\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.771038 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-netd\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.771083 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-etc-openvswitch\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771117 master-0 kubenswrapper[4207]: I0224 02:03:07.771120 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771759 master-0 kubenswrapper[4207]: I0224 02:03:07.771178 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-config\") pod \"ovnkube-node-m5kbp\" (UID: 
\"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771759 master-0 kubenswrapper[4207]: I0224 02:03:07.771236 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-openvswitch\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.771759 master-0 kubenswrapper[4207]: I0224 02:03:07.771274 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-env-overrides\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.772146 master-0 kubenswrapper[4207]: I0224 02:03:07.772091 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-env-overrides\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.772222 master-0 kubenswrapper[4207]: I0224 02:03:07.772170 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-systemd-units\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.772653 master-0 kubenswrapper[4207]: I0224 02:03:07.772602 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5kbp\" 
(UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.773192 master-0 kubenswrapper[4207]: I0224 02:03:07.773135 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-kubelet\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.773267 master-0 kubenswrapper[4207]: I0224 02:03:07.773215 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-var-lib-openvswitch\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.773267 master-0 kubenswrapper[4207]: I0224 02:03:07.773225 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-ovn\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.773385 master-0 kubenswrapper[4207]: I0224 02:03:07.773299 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-systemd\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.773385 master-0 kubenswrapper[4207]: I0224 02:03:07.773356 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.773511 master-0 kubenswrapper[4207]: I0224 02:03:07.773430 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-netd\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.773511 master-0 kubenswrapper[4207]: I0224 02:03:07.773482 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-etc-openvswitch\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.774059 master-0 kubenswrapper[4207]: I0224 02:03:07.774011 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-script-lib\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.774142 master-0 kubenswrapper[4207]: I0224 02:03:07.774080 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-openvswitch\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:07.774340 master-0 kubenswrapper[4207]: I0224 02:03:07.774287 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-config\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" 
Feb 24 02:03:07.784428 master-0 kubenswrapper[4207]: I0224 02:03:07.784369 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovn-node-metrics-cert\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp"
Feb 24 02:03:07.792845 master-0 kubenswrapper[4207]: I0224 02:03:07.792778 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpdd5\" (UniqueName: \"kubernetes.io/projected/70c4541e-cb82-4d13-95b4-905dda52bd9a-kube-api-access-kpdd5\") pod \"ovnkube-node-m5kbp\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp"
Feb 24 02:03:07.912590 master-0 kubenswrapper[4207]: I0224 02:03:07.912523 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp"
Feb 24 02:03:09.504222 master-0 kubenswrapper[4207]: I0224 02:03:09.503711 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:03:09.511870 master-0 kubenswrapper[4207]: E0224 02:03:09.504345 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810"
Feb 24 02:03:09.517869 master-0 kubenswrapper[4207]: W0224 02:03:09.517824 4207 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 24 02:03:09.518886 master-0 kubenswrapper[4207]: I0224 02:03:09.518838 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Feb 24 02:03:09.802867 master-0 kubenswrapper[4207]: I0224 02:03:09.802809 4207 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="73bcd3ba04771dbfaf54cb795e59bd88d55d88d355f426be066ffb50beee1f86" exitCode=0
Feb 24 02:03:09.803040 master-0 kubenswrapper[4207]: I0224 02:03:09.802936 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"73bcd3ba04771dbfaf54cb795e59bd88d55d88d355f426be066ffb50beee1f86"}
Feb 24 02:03:09.805571 master-0 kubenswrapper[4207]: I0224 02:03:09.805527 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7fbjw" event={"ID":"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b","Type":"ContainerStarted","Data":"6f09a47b48e34a23fe4540e9e16e00fd21fba6e23645bfc37a6eb3a8315d55ca"}
Feb 24 02:03:09.807431 master-0 kubenswrapper[4207]: I0224 02:03:09.807370 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerStarted","Data":"39e4dd152418288c83854aae3e150fd3be1fd966f2ead04dae32ed3cad75dace"}
Feb 24 02:03:09.809539 master-0 kubenswrapper[4207]: I0224 02:03:09.809495 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerStarted","Data":"fe6c9d0cd94245484579be53b58d962cf0308943b0463bad3a228d1517043027"}
Feb 24 02:03:09.809672 master-0 kubenswrapper[4207]: I0224 02:03:09.809547 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerStarted","Data":"5ba2f6486b90f665f4193dee37876ce40336ba0c3b009bf85c911f6014a84585"}
Feb 24 02:03:09.821368 master-0 kubenswrapper[4207]: I0224 02:03:09.821304 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=0.821285004 podStartE2EDuration="821.285004ms" podCreationTimestamp="2026-02-24 02:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:03:09.819533585 +0000 UTC m=+75.162837865" watchObservedRunningTime="2026-02-24 02:03:09.821285004 +0000 UTC m=+75.164589274"
Feb 24 02:03:10.515213 master-0 kubenswrapper[4207]: I0224 02:03:10.515146 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7fbjw" podStartSLOduration=2.418109543 podStartE2EDuration="16.515126765s" podCreationTimestamp="2026-02-24 02:02:54 +0000 UTC" firstStartedPulling="2026-02-24 02:02:55.34485861 +0000 UTC m=+60.688162880" lastFinishedPulling="2026-02-24 02:03:09.441875832 +0000 UTC m=+74.785180102" observedRunningTime="2026-02-24 02:03:09.883034816 +0000 UTC m=+75.226339096" watchObservedRunningTime="2026-02-24 02:03:10.515126765 +0000 UTC m=+75.858431005"
Feb 24 02:03:10.515834 master-0 kubenswrapper[4207]: I0224 02:03:10.515316 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 24 02:03:10.566185 master-0 kubenswrapper[4207]: I0224 02:03:10.566146 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-54b95"]
Feb 24 02:03:10.566471 master-0 kubenswrapper[4207]: I0224 02:03:10.566445 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:10.566510 master-0 kubenswrapper[4207]: E0224 02:03:10.566493 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42"
Feb 24 02:03:10.585747 master-0 kubenswrapper[4207]: I0224 02:03:10.585667 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=0.585562811 podStartE2EDuration="585.562811ms" podCreationTimestamp="2026-02-24 02:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:03:10.585152799 +0000 UTC m=+75.928457039" watchObservedRunningTime="2026-02-24 02:03:10.585562811 +0000 UTC m=+75.928867041"
Feb 24 02:03:10.599873 master-0 kubenswrapper[4207]: I0224 02:03:10.599846 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:10.700487 master-0 kubenswrapper[4207]: I0224 02:03:10.700449 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:10.712172 master-0 kubenswrapper[4207]: E0224 02:03:10.712109 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 02:03:10.712172 master-0 kubenswrapper[4207]: E0224 02:03:10.712153 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 02:03:10.712172 master-0 kubenswrapper[4207]: E0224 02:03:10.712168 4207 projected.go:194] Error preparing data for projected volume kube-api-access-nn8hz for pod openshift-network-diagnostics/network-check-target-54b95: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:10.712352 master-0 kubenswrapper[4207]: E0224 02:03:10.712249 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz podName:e3a675b9-feaa-4456-b7b4-0cd3afc42a42 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:11.212224993 +0000 UTC m=+76.555529233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nn8hz" (UniqueName: "kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz") pod "network-check-target-54b95" (UID: "e3a675b9-feaa-4456-b7b4-0cd3afc42a42") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:10.901217 master-0 kubenswrapper[4207]: I0224 02:03:10.901167 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:03:10.901595 master-0 kubenswrapper[4207]: E0224 02:03:10.901538 4207 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 02:03:10.901727 master-0 kubenswrapper[4207]: E0224 02:03:10.901702 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:42.901669677 +0000 UTC m=+108.244973917 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found
Feb 24 02:03:11.304756 master-0 kubenswrapper[4207]: I0224 02:03:11.304696 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:11.304938 master-0 kubenswrapper[4207]: E0224 02:03:11.304893 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 02:03:11.304938 master-0 kubenswrapper[4207]: E0224 02:03:11.304918 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 02:03:11.304938 master-0 kubenswrapper[4207]: E0224 02:03:11.304931 4207 projected.go:194] Error preparing data for projected volume kube-api-access-nn8hz for pod openshift-network-diagnostics/network-check-target-54b95: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:11.305063 master-0 kubenswrapper[4207]: E0224 02:03:11.304991 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz podName:e3a675b9-feaa-4456-b7b4-0cd3afc42a42 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:12.304971188 +0000 UTC m=+77.648275438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nn8hz" (UniqueName: "kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz") pod "network-check-target-54b95" (UID: "e3a675b9-feaa-4456-b7b4-0cd3afc42a42") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:11.503534 master-0 kubenswrapper[4207]: I0224 02:03:11.503483 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:03:11.503735 master-0 kubenswrapper[4207]: E0224 02:03:11.503615 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810"
Feb 24 02:03:11.818776 master-0 kubenswrapper[4207]: I0224 02:03:11.818689 4207 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="bdb96a50270730f3bce2e557a04b02a2063f4f2e15fbd55d5081bf5036b5f652" exitCode=0
Feb 24 02:03:11.819775 master-0 kubenswrapper[4207]: I0224 02:03:11.818776 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"bdb96a50270730f3bce2e557a04b02a2063f4f2e15fbd55d5081bf5036b5f652"}
Feb 24 02:03:11.909423 master-0 kubenswrapper[4207]: I0224 02:03:11.909352 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:03:11.909567 master-0 kubenswrapper[4207]: E0224 02:03:11.909526 4207 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 02:03:11.909690 master-0 kubenswrapper[4207]: E0224 02:03:11.909642 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:27.909618268 +0000 UTC m=+93.252922538 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 02:03:12.312514 master-0 kubenswrapper[4207]: I0224 02:03:12.312468 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:12.312758 master-0 kubenswrapper[4207]: E0224 02:03:12.312700 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 02:03:12.312758 master-0 kubenswrapper[4207]: E0224 02:03:12.312749 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 02:03:12.312822 master-0 kubenswrapper[4207]: E0224 02:03:12.312802 4207 projected.go:194] Error preparing data for projected volume kube-api-access-nn8hz for pod openshift-network-diagnostics/network-check-target-54b95: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:12.312923 master-0 kubenswrapper[4207]: E0224 02:03:12.312890 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz podName:e3a675b9-feaa-4456-b7b4-0cd3afc42a42 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:14.312863588 +0000 UTC m=+79.656167868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nn8hz" (UniqueName: "kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz") pod "network-check-target-54b95" (UID: "e3a675b9-feaa-4456-b7b4-0cd3afc42a42") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:12.503685 master-0 kubenswrapper[4207]: I0224 02:03:12.503468 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:12.503685 master-0 kubenswrapper[4207]: E0224 02:03:12.503634 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42"
Feb 24 02:03:13.503918 master-0 kubenswrapper[4207]: I0224 02:03:13.503834 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:03:13.504695 master-0 kubenswrapper[4207]: E0224 02:03:13.504052 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810"
Feb 24 02:03:13.762836 master-0 kubenswrapper[4207]: I0224 02:03:13.762647 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-p5b6q"]
Feb 24 02:03:13.763294 master-0 kubenswrapper[4207]: I0224 02:03:13.763238 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:13.774409 master-0 kubenswrapper[4207]: I0224 02:03:13.774027 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 24 02:03:13.774409 master-0 kubenswrapper[4207]: I0224 02:03:13.774169 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 24 02:03:13.774409 master-0 kubenswrapper[4207]: I0224 02:03:13.774201 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 24 02:03:13.774409 master-0 kubenswrapper[4207]: I0224 02:03:13.774186 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 24 02:03:13.774409 master-0 kubenswrapper[4207]: I0224 02:03:13.774185 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 24 02:03:13.829145 master-0 kubenswrapper[4207]: I0224 02:03:13.829097 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:13.930066 master-0 kubenswrapper[4207]: I0224 02:03:13.930010 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:13.930167 master-0 kubenswrapper[4207]: I0224 02:03:13.930074 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqtld\" (UniqueName: \"kubernetes.io/projected/adc1097b-c1ab-4f09-965d-1c819671475b-kube-api-access-nqtld\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:13.930167 master-0 kubenswrapper[4207]: I0224 02:03:13.930127 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-env-overrides\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:13.930167 master-0 kubenswrapper[4207]: I0224 02:03:13.930155 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-ovnkube-identity-cm\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:13.931750 master-0 kubenswrapper[4207]: E0224 02:03:13.931431 4207 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found
Feb 24 02:03:13.931750 master-0 kubenswrapper[4207]: E0224 02:03:13.931604 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert podName:adc1097b-c1ab-4f09-965d-1c819671475b nodeName:}" failed. No retries permitted until 2026-02-24 02:03:14.431562951 +0000 UTC m=+79.774867201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert") pod "network-node-identity-p5b6q" (UID: "adc1097b-c1ab-4f09-965d-1c819671475b") : secret "network-node-identity-cert" not found
Feb 24 02:03:14.030499 master-0 kubenswrapper[4207]: I0224 02:03:14.030450 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtld\" (UniqueName: \"kubernetes.io/projected/adc1097b-c1ab-4f09-965d-1c819671475b-kube-api-access-nqtld\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:14.030587 master-0 kubenswrapper[4207]: I0224 02:03:14.030528 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-env-overrides\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:14.030735 master-0 kubenswrapper[4207]: I0224 02:03:14.030705 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-ovnkube-identity-cm\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:14.031711 master-0 kubenswrapper[4207]: I0224 02:03:14.031672 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-env-overrides\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:14.032159 master-0 kubenswrapper[4207]: I0224 02:03:14.032120 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-ovnkube-identity-cm\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:14.060276 master-0 kubenswrapper[4207]: I0224 02:03:14.060248 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtld\" (UniqueName: \"kubernetes.io/projected/adc1097b-c1ab-4f09-965d-1c819671475b-kube-api-access-nqtld\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:14.333446 master-0 kubenswrapper[4207]: I0224 02:03:14.333340 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:14.333661 master-0 kubenswrapper[4207]: E0224 02:03:14.333476 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 02:03:14.333661 master-0 kubenswrapper[4207]: E0224 02:03:14.333493 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 02:03:14.333661 master-0 kubenswrapper[4207]: E0224 02:03:14.333504 4207 projected.go:194] Error preparing data for projected volume kube-api-access-nn8hz for pod openshift-network-diagnostics/network-check-target-54b95: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:14.333661 master-0 kubenswrapper[4207]: E0224 02:03:14.333542 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz podName:e3a675b9-feaa-4456-b7b4-0cd3afc42a42 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:18.333528815 +0000 UTC m=+83.676833055 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nn8hz" (UniqueName: "kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz") pod "network-check-target-54b95" (UID: "e3a675b9-feaa-4456-b7b4-0cd3afc42a42") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:14.434456 master-0 kubenswrapper[4207]: I0224 02:03:14.434411 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:14.440128 master-0 kubenswrapper[4207]: I0224 02:03:14.440093 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:14.504145 master-0 kubenswrapper[4207]: I0224 02:03:14.504094 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:14.504560 master-0 kubenswrapper[4207]: E0224 02:03:14.504243 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42"
Feb 24 02:03:14.695673 master-0 kubenswrapper[4207]: I0224 02:03:14.695546 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:14.706762 master-0 kubenswrapper[4207]: W0224 02:03:14.706707 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc1097b_c1ab_4f09_965d_1c819671475b.slice/crio-fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670 WatchSource:0}: Error finding container fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670: Status 404 returned error can't find the container with id fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670
Feb 24 02:03:14.827377 master-0 kubenswrapper[4207]: I0224 02:03:14.827307 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerStarted","Data":"fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670"}
Feb 24 02:03:14.830456 master-0 kubenswrapper[4207]: I0224 02:03:14.830408 4207 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="cfdb24d0d0b1a9e1ffe1c98259396806799adff6a318a37a19e4e31ee02f6987" exitCode=0
Feb 24 02:03:14.830563 master-0 kubenswrapper[4207]: I0224 02:03:14.830456 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"cfdb24d0d0b1a9e1ffe1c98259396806799adff6a318a37a19e4e31ee02f6987"}
Feb 24 02:03:15.504381 master-0 kubenswrapper[4207]: I0224 02:03:15.504296 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:03:15.505687 master-0 kubenswrapper[4207]: E0224 02:03:15.505609 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810"
Feb 24 02:03:15.686403 master-0 kubenswrapper[4207]: I0224 02:03:15.685985 4207 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 24 02:03:16.504001 master-0 kubenswrapper[4207]: I0224 02:03:16.503948 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:16.504416 master-0 kubenswrapper[4207]: E0224 02:03:16.504039 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42"
Feb 24 02:03:17.504717 master-0 kubenswrapper[4207]: I0224 02:03:17.504303 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:03:17.504717 master-0 kubenswrapper[4207]: E0224 02:03:17.504446 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810"
Feb 24 02:03:18.370355 master-0 kubenswrapper[4207]: I0224 02:03:18.370274 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:18.370707 master-0 kubenswrapper[4207]: E0224 02:03:18.370406 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 02:03:18.370707 master-0 kubenswrapper[4207]: E0224 02:03:18.370435 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 02:03:18.370707 master-0 kubenswrapper[4207]: E0224 02:03:18.370448 4207 projected.go:194] Error preparing data for projected volume kube-api-access-nn8hz for pod openshift-network-diagnostics/network-check-target-54b95: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:18.370707 master-0 kubenswrapper[4207]: E0224 02:03:18.370505 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz podName:e3a675b9-feaa-4456-b7b4-0cd3afc42a42 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:26.370487986 +0000 UTC m=+91.713792236 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nn8hz" (UniqueName: "kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz") pod "network-check-target-54b95" (UID: "e3a675b9-feaa-4456-b7b4-0cd3afc42a42") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 02:03:18.503955 master-0 kubenswrapper[4207]: I0224 02:03:18.503907 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:18.504260 master-0 kubenswrapper[4207]: E0224 02:03:18.504059 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42"
Feb 24 02:03:19.504240 master-0 kubenswrapper[4207]: I0224 02:03:19.504166 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:19.505292 master-0 kubenswrapper[4207]: E0224 02:03:19.504323 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:20.504170 master-0 kubenswrapper[4207]: I0224 02:03:20.504081 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:20.504468 master-0 kubenswrapper[4207]: E0224 02:03:20.504200 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:21.504045 master-0 kubenswrapper[4207]: I0224 02:03:21.503975 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:21.505942 master-0 kubenswrapper[4207]: E0224 02:03:21.504086 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:22.504373 master-0 kubenswrapper[4207]: I0224 02:03:22.504302 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:22.504823 master-0 kubenswrapper[4207]: E0224 02:03:22.504480 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:23.504623 master-0 kubenswrapper[4207]: I0224 02:03:23.504233 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:23.505671 master-0 kubenswrapper[4207]: E0224 02:03:23.504753 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:24.503478 master-0 kubenswrapper[4207]: I0224 02:03:24.503373 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:24.503478 master-0 kubenswrapper[4207]: E0224 02:03:24.503598 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:25.504319 master-0 kubenswrapper[4207]: I0224 02:03:25.504161 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:25.505893 master-0 kubenswrapper[4207]: E0224 02:03:25.505356 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:26.458820 master-0 kubenswrapper[4207]: I0224 02:03:26.458773 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:26.459111 master-0 kubenswrapper[4207]: E0224 02:03:26.458937 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 02:03:26.459111 master-0 kubenswrapper[4207]: E0224 02:03:26.458955 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 02:03:26.459111 master-0 kubenswrapper[4207]: E0224 02:03:26.458966 4207 projected.go:194] Error preparing data for projected volume kube-api-access-nn8hz for pod openshift-network-diagnostics/network-check-target-54b95: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 02:03:26.459111 master-0 kubenswrapper[4207]: E0224 02:03:26.459019 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz podName:e3a675b9-feaa-4456-b7b4-0cd3afc42a42 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:42.459004116 +0000 UTC m=+107.802308356 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nn8hz" (UniqueName: "kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz") pod "network-check-target-54b95" (UID: "e3a675b9-feaa-4456-b7b4-0cd3afc42a42") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 02:03:26.504244 master-0 kubenswrapper[4207]: I0224 02:03:26.504184 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:26.504475 master-0 kubenswrapper[4207]: E0224 02:03:26.504433 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:27.504214 master-0 kubenswrapper[4207]: I0224 02:03:27.504128 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:27.505486 master-0 kubenswrapper[4207]: E0224 02:03:27.504498 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:27.526298 master-0 kubenswrapper[4207]: I0224 02:03:27.526214 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 24 02:03:27.872198 master-0 kubenswrapper[4207]: I0224 02:03:27.872098 4207 generic.go:334] "Generic (PLEG): container finished" podID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerID="022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff" exitCode=0 Feb 24 02:03:27.872410 master-0 kubenswrapper[4207]: I0224 02:03:27.872191 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff"} Feb 24 02:03:27.878535 master-0 kubenswrapper[4207]: I0224 02:03:27.878454 4207 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="46f1df0f3044924b6c94bc53975525ce01b17baddc32b6007d1fff90c64f595f" exitCode=0 Feb 24 02:03:27.878691 master-0 kubenswrapper[4207]: I0224 02:03:27.878542 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"46f1df0f3044924b6c94bc53975525ce01b17baddc32b6007d1fff90c64f595f"} Feb 24 02:03:27.886161 master-0 kubenswrapper[4207]: I0224 02:03:27.884405 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerStarted","Data":"1d78e51e0a1da7f353fa2fc0c8e9c9a46d124e7c769ba9917e9138703d244089"} Feb 24 02:03:27.892082 master-0 kubenswrapper[4207]: I0224 02:03:27.891993 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=0.891967879 podStartE2EDuration="891.967879ms" podCreationTimestamp="2026-02-24 02:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:03:27.89126586 +0000 UTC m=+93.234570140" watchObservedRunningTime="2026-02-24 02:03:27.891967879 +0000 UTC m=+93.235272149" Feb 24 02:03:27.893805 master-0 kubenswrapper[4207]: I0224 02:03:27.893749 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerStarted","Data":"40959528c0e652134371f4afb20a4ee849f4f1c1c0599ddd64b9076a7771bc13"} Feb 24 02:03:27.894349 master-0 kubenswrapper[4207]: I0224 02:03:27.893805 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerStarted","Data":"6f03a78798c0233a4b142276c0a188023ee91558d59191ef7fa3f909cb0c5802"} Feb 24 02:03:27.975627 master-0 kubenswrapper[4207]: I0224 02:03:27.975459 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:27.976629 master-0 kubenswrapper[4207]: E0224 02:03:27.976552 4207 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:03:27.976761 master-0 kubenswrapper[4207]: E0224 02:03:27.976700 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs 
podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:59.976664035 +0000 UTC m=+125.319968315 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 02:03:27.990814 master-0 kubenswrapper[4207]: I0224 02:03:27.990698 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" podStartSLOduration=3.610208273 podStartE2EDuration="20.990668568s" podCreationTimestamp="2026-02-24 02:03:07 +0000 UTC" firstStartedPulling="2026-02-24 02:03:09.609337139 +0000 UTC m=+74.952641419" lastFinishedPulling="2026-02-24 02:03:26.989797434 +0000 UTC m=+92.333101714" observedRunningTime="2026-02-24 02:03:27.989739702 +0000 UTC m=+93.333044002" watchObservedRunningTime="2026-02-24 02:03:27.990668568 +0000 UTC m=+93.333972848" Feb 24 02:03:28.504803 master-0 kubenswrapper[4207]: I0224 02:03:28.504201 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:28.512940 master-0 kubenswrapper[4207]: E0224 02:03:28.504962 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:28.902781 master-0 kubenswrapper[4207]: I0224 02:03:28.902549 4207 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="0261b05dc86f44c57d1260d8e9e574b7afb0942396c397b4be98f1486a4e967b" exitCode=0 Feb 24 02:03:28.902781 master-0 kubenswrapper[4207]: I0224 02:03:28.902745 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"0261b05dc86f44c57d1260d8e9e574b7afb0942396c397b4be98f1486a4e967b"} Feb 24 02:03:28.923670 master-0 kubenswrapper[4207]: I0224 02:03:28.920848 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerStarted","Data":"f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319"} Feb 24 02:03:28.923670 master-0 kubenswrapper[4207]: I0224 02:03:28.920930 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerStarted","Data":"0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e"} Feb 24 02:03:28.923670 master-0 kubenswrapper[4207]: I0224 02:03:28.920959 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerStarted","Data":"fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb"} Feb 24 02:03:28.923670 master-0 kubenswrapper[4207]: I0224 02:03:28.920980 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" 
event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerStarted","Data":"304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61"} Feb 24 02:03:28.923670 master-0 kubenswrapper[4207]: I0224 02:03:28.921000 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerStarted","Data":"1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb"} Feb 24 02:03:28.923670 master-0 kubenswrapper[4207]: I0224 02:03:28.921017 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerStarted","Data":"d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d"} Feb 24 02:03:28.942498 master-0 kubenswrapper[4207]: I0224 02:03:28.942388 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-p5b6q" podStartSLOduration=3.655419972 podStartE2EDuration="15.942351581s" podCreationTimestamp="2026-02-24 02:03:13 +0000 UTC" firstStartedPulling="2026-02-24 02:03:14.708638957 +0000 UTC m=+80.051943227" lastFinishedPulling="2026-02-24 02:03:26.995570566 +0000 UTC m=+92.338874836" observedRunningTime="2026-02-24 02:03:28.015148984 +0000 UTC m=+93.358453264" watchObservedRunningTime="2026-02-24 02:03:28.942351581 +0000 UTC m=+94.285655861" Feb 24 02:03:29.503564 master-0 kubenswrapper[4207]: I0224 02:03:29.503440 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:29.503872 master-0 kubenswrapper[4207]: E0224 02:03:29.503673 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:29.931625 master-0 kubenswrapper[4207]: I0224 02:03:29.931299 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerStarted","Data":"577e534e7b46ead634ade626be31364b87f35a324373685d74e9e47dc0da5b44"} Feb 24 02:03:30.504027 master-0 kubenswrapper[4207]: I0224 02:03:30.503936 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:30.504378 master-0 kubenswrapper[4207]: E0224 02:03:30.504189 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:31.504123 master-0 kubenswrapper[4207]: I0224 02:03:31.504050 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:31.505099 master-0 kubenswrapper[4207]: E0224 02:03:31.504256 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:31.946744 master-0 kubenswrapper[4207]: I0224 02:03:31.946389 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerStarted","Data":"c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4"} Feb 24 02:03:32.503435 master-0 kubenswrapper[4207]: I0224 02:03:32.503341 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:32.503847 master-0 kubenswrapper[4207]: E0224 02:03:32.503520 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:33.450959 master-0 kubenswrapper[4207]: I0224 02:03:33.450531 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jtdht" podStartSLOduration=7.144055488 podStartE2EDuration="38.450507728s" podCreationTimestamp="2026-02-24 02:02:55 +0000 UTC" firstStartedPulling="2026-02-24 02:02:55.564960533 +0000 UTC m=+60.908264813" lastFinishedPulling="2026-02-24 02:03:26.871412773 +0000 UTC m=+92.214717053" observedRunningTime="2026-02-24 02:03:30.076446441 +0000 UTC m=+95.419750721" watchObservedRunningTime="2026-02-24 02:03:33.450507728 +0000 UTC m=+98.793811998" Feb 24 02:03:33.451952 master-0 kubenswrapper[4207]: I0224 02:03:33.451897 4207 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5kbp"] Feb 24 02:03:33.504180 master-0 kubenswrapper[4207]: I0224 02:03:33.503606 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:33.504180 master-0 kubenswrapper[4207]: E0224 02:03:33.503882 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:33.961105 master-0 kubenswrapper[4207]: I0224 02:03:33.960990 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerStarted","Data":"700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a"} Feb 24 02:03:33.961398 master-0 kubenswrapper[4207]: I0224 02:03:33.961293 4207 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="sbdb" containerID="cri-o://c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4" gracePeriod=30 Feb 24 02:03:33.961436 master-0 kubenswrapper[4207]: I0224 02:03:33.961381 4207 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="nbdb" containerID="cri-o://f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319" gracePeriod=30 Feb 24 02:03:33.961436 master-0 kubenswrapper[4207]: I0224 02:03:33.961340 4207 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb" gracePeriod=30 Feb 24 02:03:33.961556 master-0 kubenswrapper[4207]: I0224 02:03:33.961290 4207 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovn-controller" containerID="cri-o://d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d" gracePeriod=30 Feb 24 02:03:33.961556 master-0 kubenswrapper[4207]: I0224 
02:03:33.961380 4207 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kube-rbac-proxy-node" containerID="cri-o://304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61" gracePeriod=30 Feb 24 02:03:33.961645 master-0 kubenswrapper[4207]: I0224 02:03:33.961436 4207 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovn-acl-logging" containerID="cri-o://1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb" gracePeriod=30 Feb 24 02:03:33.961726 master-0 kubenswrapper[4207]: I0224 02:03:33.961553 4207 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="northd" containerID="cri-o://0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e" gracePeriod=30 Feb 24 02:03:33.961787 master-0 kubenswrapper[4207]: I0224 02:03:33.961759 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:33.961991 master-0 kubenswrapper[4207]: I0224 02:03:33.961915 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:33.962038 master-0 kubenswrapper[4207]: I0224 02:03:33.962014 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:33.965321 master-0 kubenswrapper[4207]: E0224 02:03:33.965235 4207 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 24 02:03:33.965532 master-0 kubenswrapper[4207]: E0224 02:03:33.965479 4207 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 24 02:03:33.967007 master-0 kubenswrapper[4207]: E0224 02:03:33.966942 4207 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 24 02:03:33.970245 master-0 kubenswrapper[4207]: E0224 02:03:33.969199 4207 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 24 02:03:33.970245 master-0 kubenswrapper[4207]: E0224 02:03:33.969641 4207 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 24 02:03:33.970245 master-0 kubenswrapper[4207]: E0224 02:03:33.969708 4207 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="nbdb" Feb 24 02:03:33.972722 master-0 kubenswrapper[4207]: E0224 02:03:33.970853 4207 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 24 02:03:33.972722 master-0 kubenswrapper[4207]: E0224 02:03:33.971407 4207 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="sbdb" Feb 24 02:03:33.996908 master-0 kubenswrapper[4207]: I0224 02:03:33.996824 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podStartSLOduration=9.413046744 podStartE2EDuration="26.996802171s" podCreationTimestamp="2026-02-24 02:03:07 +0000 UTC" firstStartedPulling="2026-02-24 02:03:09.346230779 +0000 UTC m=+74.689535049" lastFinishedPulling="2026-02-24 02:03:26.929986206 +0000 UTC m=+92.273290476" observedRunningTime="2026-02-24 02:03:33.995733311 +0000 UTC m=+99.339037591" watchObservedRunningTime="2026-02-24 02:03:33.996802171 +0000 UTC m=+99.340106451" Feb 24 02:03:34.009035 master-0 
kubenswrapper[4207]: I0224 02:03:34.006785 4207 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovnkube-controller" containerID="cri-o://700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a" gracePeriod=30 Feb 24 02:03:34.503815 master-0 kubenswrapper[4207]: I0224 02:03:34.503745 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:34.504433 master-0 kubenswrapper[4207]: E0224 02:03:34.503943 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:34.972215 master-0 kubenswrapper[4207]: I0224 02:03:34.972161 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/kube-rbac-proxy-ovn-metrics/0.log" Feb 24 02:03:34.973006 master-0 kubenswrapper[4207]: I0224 02:03:34.972962 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/kube-rbac-proxy-node/0.log" Feb 24 02:03:34.973687 master-0 kubenswrapper[4207]: I0224 02:03:34.973653 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/ovn-acl-logging/0.log" Feb 24 02:03:34.974444 master-0 kubenswrapper[4207]: I0224 02:03:34.974405 4207 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/ovn-controller/0.log" Feb 24 02:03:34.975103 master-0 kubenswrapper[4207]: I0224 02:03:34.975058 4207 generic.go:334] "Generic (PLEG): container finished" podID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerID="c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4" exitCode=0 Feb 24 02:03:34.975103 master-0 kubenswrapper[4207]: I0224 02:03:34.975099 4207 generic.go:334] "Generic (PLEG): container finished" podID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerID="f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319" exitCode=0 Feb 24 02:03:34.975226 master-0 kubenswrapper[4207]: I0224 02:03:34.975115 4207 generic.go:334] "Generic (PLEG): container finished" podID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerID="0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e" exitCode=0 Feb 24 02:03:34.975226 master-0 kubenswrapper[4207]: I0224 02:03:34.975131 4207 generic.go:334] "Generic (PLEG): container finished" podID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerID="fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb" exitCode=143 Feb 24 02:03:34.975226 master-0 kubenswrapper[4207]: I0224 02:03:34.975150 4207 generic.go:334] "Generic (PLEG): container finished" podID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerID="304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61" exitCode=143 Feb 24 02:03:34.975226 master-0 kubenswrapper[4207]: I0224 02:03:34.975167 4207 generic.go:334] "Generic (PLEG): container finished" podID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerID="1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb" exitCode=143 Feb 24 02:03:34.975226 master-0 kubenswrapper[4207]: I0224 02:03:34.975187 4207 generic.go:334] "Generic (PLEG): container finished" podID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerID="d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d" 
exitCode=143 Feb 24 02:03:34.975226 master-0 kubenswrapper[4207]: I0224 02:03:34.975160 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4"} Feb 24 02:03:34.975545 master-0 kubenswrapper[4207]: I0224 02:03:34.975246 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319"} Feb 24 02:03:34.975545 master-0 kubenswrapper[4207]: I0224 02:03:34.975272 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e"} Feb 24 02:03:34.975545 master-0 kubenswrapper[4207]: I0224 02:03:34.975293 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb"} Feb 24 02:03:34.975545 master-0 kubenswrapper[4207]: I0224 02:03:34.975313 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61"} Feb 24 02:03:34.975545 master-0 kubenswrapper[4207]: I0224 02:03:34.975331 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb"} Feb 24 
02:03:34.975545 master-0 kubenswrapper[4207]: I0224 02:03:34.975349 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d"} Feb 24 02:03:35.199778 master-0 kubenswrapper[4207]: I0224 02:03:35.199701 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/ovnkube-controller/0.log" Feb 24 02:03:35.202312 master-0 kubenswrapper[4207]: I0224 02:03:35.202261 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/kube-rbac-proxy-ovn-metrics/0.log" Feb 24 02:03:35.203109 master-0 kubenswrapper[4207]: I0224 02:03:35.203062 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/kube-rbac-proxy-node/0.log" Feb 24 02:03:35.203807 master-0 kubenswrapper[4207]: I0224 02:03:35.203760 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/ovn-acl-logging/0.log" Feb 24 02:03:35.204802 master-0 kubenswrapper[4207]: I0224 02:03:35.204743 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/ovn-controller/0.log" Feb 24 02:03:35.205837 master-0 kubenswrapper[4207]: I0224 02:03:35.205787 4207 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:35.271774 master-0 kubenswrapper[4207]: I0224 02:03:35.271631 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rg9r6"] Feb 24 02:03:35.271973 master-0 kubenswrapper[4207]: E0224 02:03:35.271901 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kubecfg-setup" Feb 24 02:03:35.271973 master-0 kubenswrapper[4207]: I0224 02:03:35.271966 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kubecfg-setup" Feb 24 02:03:35.272106 master-0 kubenswrapper[4207]: E0224 02:03:35.272079 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovn-controller" Feb 24 02:03:35.272106 master-0 kubenswrapper[4207]: I0224 02:03:35.272101 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovn-controller" Feb 24 02:03:35.272211 master-0 kubenswrapper[4207]: E0224 02:03:35.272118 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="sbdb" Feb 24 02:03:35.272211 master-0 kubenswrapper[4207]: I0224 02:03:35.272137 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="sbdb" Feb 24 02:03:35.272211 master-0 kubenswrapper[4207]: E0224 02:03:35.272151 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovn-acl-logging" Feb 24 02:03:35.272211 master-0 kubenswrapper[4207]: I0224 02:03:35.272165 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovn-acl-logging" Feb 24 02:03:35.272211 master-0 kubenswrapper[4207]: E0224 02:03:35.272179 4207 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovnkube-controller" Feb 24 02:03:35.272211 master-0 kubenswrapper[4207]: I0224 02:03:35.272193 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovnkube-controller" Feb 24 02:03:35.272211 master-0 kubenswrapper[4207]: E0224 02:03:35.272209 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 02:03:35.272211 master-0 kubenswrapper[4207]: I0224 02:03:35.272224 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: E0224 02:03:35.272241 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="northd" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: I0224 02:03:35.272255 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="northd" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: E0224 02:03:35.272271 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kube-rbac-proxy-node" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: I0224 02:03:35.272284 4207 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kube-rbac-proxy-node" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: E0224 02:03:35.272299 4207 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="nbdb" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: I0224 02:03:35.272311 4207 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="nbdb" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: I0224 02:03:35.272391 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kube-rbac-proxy-node" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: I0224 02:03:35.272412 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovn-controller" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: I0224 02:03:35.272426 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="nbdb" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: I0224 02:03:35.272439 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="sbdb" Feb 24 02:03:35.272656 master-0 kubenswrapper[4207]: I0224 02:03:35.272650 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovnkube-controller" Feb 24 02:03:35.273196 master-0 kubenswrapper[4207]: I0224 02:03:35.272669 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="ovn-acl-logging" Feb 24 02:03:35.273196 master-0 kubenswrapper[4207]: I0224 02:03:35.272721 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 02:03:35.273196 master-0 kubenswrapper[4207]: I0224 02:03:35.272736 4207 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerName="northd" Feb 24 02:03:35.274161 master-0 kubenswrapper[4207]: I0224 02:03:35.274112 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.340475 master-0 kubenswrapper[4207]: I0224 02:03:35.340395 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-log-socket\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.340475 master-0 kubenswrapper[4207]: I0224 02:03:35.340474 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpdd5\" (UniqueName: \"kubernetes.io/projected/70c4541e-cb82-4d13-95b4-905dda52bd9a-kube-api-access-kpdd5\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.340684 master-0 kubenswrapper[4207]: I0224 02:03:35.340563 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-config\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.340684 master-0 kubenswrapper[4207]: I0224 02:03:35.340632 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-log-socket" (OuterVolumeSpecName: "log-socket") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.341076 master-0 kubenswrapper[4207]: I0224 02:03:35.341026 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-netns\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.341142 master-0 kubenswrapper[4207]: I0224 02:03:35.341094 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.341199 master-0 kubenswrapper[4207]: I0224 02:03:35.341178 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-var-lib-openvswitch\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.341256 master-0 kubenswrapper[4207]: I0224 02:03:35.341233 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.341539 master-0 kubenswrapper[4207]: I0224 02:03:35.341480 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-node-log\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.341627 master-0 kubenswrapper[4207]: I0224 02:03:35.341546 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-systemd-units\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.341627 master-0 kubenswrapper[4207]: I0224 02:03:35.341602 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-node-log" (OuterVolumeSpecName: "node-log") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.341627 master-0 kubenswrapper[4207]: I0224 02:03:35.341604 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:03:35.341780 master-0 kubenswrapper[4207]: I0224 02:03:35.341617 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-env-overrides\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.341780 master-0 kubenswrapper[4207]: I0224 02:03:35.341692 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.341780 master-0 kubenswrapper[4207]: I0224 02:03:35.341719 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovn-node-metrics-cert\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.341953 master-0 kubenswrapper[4207]: I0224 02:03:35.341825 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-kubelet\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.341953 master-0 kubenswrapper[4207]: I0224 02:03:35.341933 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-ovn\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.342461 master-0 
kubenswrapper[4207]: I0224 02:03:35.341975 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-bin\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.341990 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342013 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-systemd\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342010 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342093 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-etc-openvswitch\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342143 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-slash\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342111 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342181 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-ovn-kubernetes\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342178 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). 
InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342230 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-slash" (OuterVolumeSpecName: "host-slash") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342252 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342258 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342281 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342216 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-netd\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342375 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-script-lib\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342411 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-openvswitch\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.342461 master-0 kubenswrapper[4207]: I0224 02:03:35.342448 4207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"70c4541e-cb82-4d13-95b4-905dda52bd9a\" (UID: \"70c4541e-cb82-4d13-95b4-905dda52bd9a\") " Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.342560 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-var-lib-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.342646 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-log-socket\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.342649 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.342686 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-config\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.342682 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.342747 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-systemd-units\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.342784 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-slash\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.342879 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-netns\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.343040 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-ovn\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.343106 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovn-node-metrics-cert\") pod 
\"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.343149 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-kubelet\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.343187 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-etc-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.343224 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.343288 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-netd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.343288 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:03:35.343981 master-0 kubenswrapper[4207]: I0224 02:03:35.343349 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-node-log\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343434 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8jx\" (UniqueName: \"kubernetes.io/projected/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-kube-api-access-rc8jx\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343522 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343563 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343625 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-script-lib\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343662 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-systemd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343695 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-env-overrides\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343770 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-bin\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343885 4207 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-systemd-units\") on node \"master-0\" DevicePath \"\"" Feb 24 
02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343909 4207 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-node-log\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343927 4207 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-kubelet\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343945 4207 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-env-overrides\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343964 4207 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.343983 4207 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.344004 4207 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.344023 4207 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-slash\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: 
I0224 02:03:35.344041 4207 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.344061 4207 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.344090 4207 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.344108 4207 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.344129 4207 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.344825 master-0 kubenswrapper[4207]: I0224 02:03:35.344152 4207 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-log-socket\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.345871 master-0 kubenswrapper[4207]: I0224 02:03:35.344171 4207 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-host-run-netns\") on node \"master-0\" 
DevicePath \"\"" Feb 24 02:03:35.345871 master-0 kubenswrapper[4207]: I0224 02:03:35.344190 4207 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.345871 master-0 kubenswrapper[4207]: I0224 02:03:35.344208 4207 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.349528 master-0 kubenswrapper[4207]: I0224 02:03:35.349463 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c4541e-cb82-4d13-95b4-905dda52bd9a-kube-api-access-kpdd5" (OuterVolumeSpecName: "kube-api-access-kpdd5") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "kube-api-access-kpdd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:03:35.350225 master-0 kubenswrapper[4207]: I0224 02:03:35.350166 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:03:35.355516 master-0 kubenswrapper[4207]: I0224 02:03:35.355452 4207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "70c4541e-cb82-4d13-95b4-905dda52bd9a" (UID: "70c4541e-cb82-4d13-95b4-905dda52bd9a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:03:35.445631 master-0 kubenswrapper[4207]: I0224 02:03:35.445516 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-ovn\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.445790 master-0 kubenswrapper[4207]: I0224 02:03:35.445700 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-ovn\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.445882 master-0 kubenswrapper[4207]: I0224 02:03:35.445815 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovn-node-metrics-cert\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.445951 master-0 kubenswrapper[4207]: I0224 02:03:35.445896 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-kubelet\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446018 master-0 kubenswrapper[4207]: I0224 02:03:35.445988 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-kubelet\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 
24 02:03:35.446180 master-0 kubenswrapper[4207]: I0224 02:03:35.446122 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-etc-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446255 master-0 kubenswrapper[4207]: I0224 02:03:35.446190 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446315 master-0 kubenswrapper[4207]: I0224 02:03:35.446265 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-etc-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446315 master-0 kubenswrapper[4207]: I0224 02:03:35.446268 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446315 master-0 kubenswrapper[4207]: I0224 02:03:35.446307 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-netd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" 
Feb 24 02:03:35.446496 master-0 kubenswrapper[4207]: I0224 02:03:35.446347 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-netd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446496 master-0 kubenswrapper[4207]: I0224 02:03:35.446352 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-node-log\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446496 master-0 kubenswrapper[4207]: I0224 02:03:35.446414 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-node-log\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446496 master-0 kubenswrapper[4207]: I0224 02:03:35.446421 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8jx\" (UniqueName: \"kubernetes.io/projected/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-kube-api-access-rc8jx\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446496 master-0 kubenswrapper[4207]: I0224 02:03:35.446489 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446831 master-0 
kubenswrapper[4207]: I0224 02:03:35.446534 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446831 master-0 kubenswrapper[4207]: I0224 02:03:35.446655 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446831 master-0 kubenswrapper[4207]: I0224 02:03:35.446707 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-script-lib\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446831 master-0 kubenswrapper[4207]: I0224 02:03:35.446758 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-systemd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.446831 master-0 kubenswrapper[4207]: I0224 02:03:35.446793 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-env-overrides\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.447103 master-0 kubenswrapper[4207]: I0224 02:03:35.446896 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.447218 master-0 kubenswrapper[4207]: I0224 02:03:35.447161 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-systemd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.447306 master-0 kubenswrapper[4207]: I0224 02:03:35.447267 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-bin\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.447396 master-0 kubenswrapper[4207]: I0224 02:03:35.447368 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-var-lib-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.447473 master-0 kubenswrapper[4207]: I0224 02:03:35.447447 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-log-socket\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.447562 master-0 kubenswrapper[4207]: I0224 02:03:35.447534 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-config\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.447659 master-0 kubenswrapper[4207]: I0224 02:03:35.447642 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-systemd-units\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.447757 master-0 kubenswrapper[4207]: I0224 02:03:35.447728 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-slash\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.447983 master-0 kubenswrapper[4207]: I0224 02:03:35.447932 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-env-overrides\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.448046 master-0 kubenswrapper[4207]: I0224 02:03:35.448011 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-log-socket\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 
24 02:03:35.448119 master-0 kubenswrapper[4207]: I0224 02:03:35.448073 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-script-lib\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.448178 master-0 kubenswrapper[4207]: I0224 02:03:35.448099 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-slash\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.448178 master-0 kubenswrapper[4207]: I0224 02:03:35.448148 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-netns\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.448288 master-0 kubenswrapper[4207]: I0224 02:03:35.448211 4207 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpdd5\" (UniqueName: \"kubernetes.io/projected/70c4541e-cb82-4d13-95b4-905dda52bd9a-kube-api-access-kpdd5\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.448288 master-0 kubenswrapper[4207]: I0224 02:03:35.448222 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-netns\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.448288 master-0 kubenswrapper[4207]: I0224 02:03:35.448235 4207 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70c4541e-cb82-4d13-95b4-905dda52bd9a-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.448288 master-0 kubenswrapper[4207]: I0224 02:03:35.448211 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-bin\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.448288 master-0 kubenswrapper[4207]: I0224 02:03:35.448255 4207 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70c4541e-cb82-4d13-95b4-905dda52bd9a-run-systemd\") on node \"master-0\" DevicePath \"\"" Feb 24 02:03:35.448542 master-0 kubenswrapper[4207]: I0224 02:03:35.448291 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-var-lib-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.448542 master-0 kubenswrapper[4207]: I0224 02:03:35.448343 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-systemd-units\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.450193 master-0 kubenswrapper[4207]: I0224 02:03:35.450145 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-config\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" 
Feb 24 02:03:35.452294 master-0 kubenswrapper[4207]: I0224 02:03:35.452208 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovn-node-metrics-cert\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.475372 master-0 kubenswrapper[4207]: I0224 02:03:35.475254 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8jx\" (UniqueName: \"kubernetes.io/projected/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-kube-api-access-rc8jx\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.503690 master-0 kubenswrapper[4207]: I0224 02:03:35.503556 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:35.505320 master-0 kubenswrapper[4207]: E0224 02:03:35.505247 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:35.588607 master-0 kubenswrapper[4207]: I0224 02:03:35.588479 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:35.606569 master-0 kubenswrapper[4207]: W0224 02:03:35.606499 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb39fcc8_beb4_410e_b2a4_0b3e150719cc.slice/crio-5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab WatchSource:0}: Error finding container 5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab: Status 404 returned error can't find the container with id 5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab Feb 24 02:03:35.981920 master-0 kubenswrapper[4207]: I0224 02:03:35.981844 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/ovnkube-controller/0.log" Feb 24 02:03:35.984800 master-0 kubenswrapper[4207]: I0224 02:03:35.984766 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/kube-rbac-proxy-ovn-metrics/0.log" Feb 24 02:03:35.985785 master-0 kubenswrapper[4207]: I0224 02:03:35.985745 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/kube-rbac-proxy-node/0.log" Feb 24 02:03:35.986682 master-0 kubenswrapper[4207]: I0224 02:03:35.986627 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/ovn-acl-logging/0.log" Feb 24 02:03:35.987536 master-0 kubenswrapper[4207]: I0224 02:03:35.987484 4207 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5kbp_70c4541e-cb82-4d13-95b4-905dda52bd9a/ovn-controller/0.log" Feb 24 02:03:35.988226 master-0 kubenswrapper[4207]: I0224 02:03:35.988163 4207 generic.go:334] "Generic (PLEG): container finished" 
podID="70c4541e-cb82-4d13-95b4-905dda52bd9a" containerID="700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a" exitCode=1 Feb 24 02:03:35.988354 master-0 kubenswrapper[4207]: I0224 02:03:35.988300 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a"} Feb 24 02:03:35.988354 master-0 kubenswrapper[4207]: I0224 02:03:35.988328 4207 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" Feb 24 02:03:35.988633 master-0 kubenswrapper[4207]: I0224 02:03:35.988376 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5kbp" event={"ID":"70c4541e-cb82-4d13-95b4-905dda52bd9a","Type":"ContainerDied","Data":"39e4dd152418288c83854aae3e150fd3be1fd966f2ead04dae32ed3cad75dace"} Feb 24 02:03:35.988633 master-0 kubenswrapper[4207]: I0224 02:03:35.988412 4207 scope.go:117] "RemoveContainer" containerID="700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a" Feb 24 02:03:35.991082 master-0 kubenswrapper[4207]: I0224 02:03:35.991015 4207 generic.go:334] "Generic (PLEG): container finished" podID="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" containerID="c3063a301534062c954aa79867d0cc96573d7146ccda3bfb83406935c96bf2b9" exitCode=0 Feb 24 02:03:35.991173 master-0 kubenswrapper[4207]: I0224 02:03:35.991113 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerDied","Data":"c3063a301534062c954aa79867d0cc96573d7146ccda3bfb83406935c96bf2b9"} Feb 24 02:03:35.991173 master-0 kubenswrapper[4207]: I0224 02:03:35.991159 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" 
event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab"} Feb 24 02:03:36.012515 master-0 kubenswrapper[4207]: I0224 02:03:36.012392 4207 scope.go:117] "RemoveContainer" containerID="c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4" Feb 24 02:03:36.029524 master-0 kubenswrapper[4207]: I0224 02:03:36.029453 4207 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5kbp"] Feb 24 02:03:36.033326 master-0 kubenswrapper[4207]: I0224 02:03:36.033233 4207 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5kbp"] Feb 24 02:03:36.039839 master-0 kubenswrapper[4207]: I0224 02:03:36.039791 4207 scope.go:117] "RemoveContainer" containerID="f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319" Feb 24 02:03:36.057743 master-0 kubenswrapper[4207]: I0224 02:03:36.057688 4207 scope.go:117] "RemoveContainer" containerID="0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e" Feb 24 02:03:36.080596 master-0 kubenswrapper[4207]: I0224 02:03:36.080513 4207 scope.go:117] "RemoveContainer" containerID="fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb" Feb 24 02:03:36.107029 master-0 kubenswrapper[4207]: I0224 02:03:36.106962 4207 scope.go:117] "RemoveContainer" containerID="304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61" Feb 24 02:03:36.121461 master-0 kubenswrapper[4207]: I0224 02:03:36.121407 4207 scope.go:117] "RemoveContainer" containerID="1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb" Feb 24 02:03:36.135994 master-0 kubenswrapper[4207]: I0224 02:03:36.135834 4207 scope.go:117] "RemoveContainer" containerID="d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d" Feb 24 02:03:36.153946 master-0 kubenswrapper[4207]: I0224 02:03:36.153224 4207 scope.go:117] "RemoveContainer" 
containerID="022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff" Feb 24 02:03:36.170997 master-0 kubenswrapper[4207]: I0224 02:03:36.170949 4207 scope.go:117] "RemoveContainer" containerID="700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a" Feb 24 02:03:36.171649 master-0 kubenswrapper[4207]: E0224 02:03:36.171568 4207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a\": container with ID starting with 700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a not found: ID does not exist" containerID="700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a" Feb 24 02:03:36.171851 master-0 kubenswrapper[4207]: I0224 02:03:36.171659 4207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a"} err="failed to get container status \"700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a\": rpc error: code = NotFound desc = could not find container \"700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a\": container with ID starting with 700ed9e73a3b7959e5250d3b5d994d4c07c1209bc00871cc5d48ed3ceeb9f11a not found: ID does not exist" Feb 24 02:03:36.171851 master-0 kubenswrapper[4207]: I0224 02:03:36.171842 4207 scope.go:117] "RemoveContainer" containerID="c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4" Feb 24 02:03:36.172419 master-0 kubenswrapper[4207]: E0224 02:03:36.172302 4207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4\": container with ID starting with c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4 not found: ID does not exist" 
containerID="c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4" Feb 24 02:03:36.172419 master-0 kubenswrapper[4207]: I0224 02:03:36.172356 4207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4"} err="failed to get container status \"c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4\": rpc error: code = NotFound desc = could not find container \"c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4\": container with ID starting with c60da12f5e48e3c24b66984bd016dde81cfe90826c4afeae0e33075af2f42aa4 not found: ID does not exist" Feb 24 02:03:36.172419 master-0 kubenswrapper[4207]: I0224 02:03:36.172393 4207 scope.go:117] "RemoveContainer" containerID="f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319" Feb 24 02:03:36.172905 master-0 kubenswrapper[4207]: E0224 02:03:36.172822 4207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319\": container with ID starting with f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319 not found: ID does not exist" containerID="f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319" Feb 24 02:03:36.172905 master-0 kubenswrapper[4207]: I0224 02:03:36.172861 4207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319"} err="failed to get container status \"f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319\": rpc error: code = NotFound desc = could not find container \"f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319\": container with ID starting with f92ac89c1c0a664ab7711d88b604153ad5e50b45fc5bf9c734191f13af287319 not found: ID does not exist" Feb 24 02:03:36.172905 master-0 
kubenswrapper[4207]: I0224 02:03:36.172887 4207 scope.go:117] "RemoveContainer" containerID="0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e" Feb 24 02:03:36.173364 master-0 kubenswrapper[4207]: E0224 02:03:36.173294 4207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e\": container with ID starting with 0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e not found: ID does not exist" containerID="0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e" Feb 24 02:03:36.173364 master-0 kubenswrapper[4207]: I0224 02:03:36.173329 4207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e"} err="failed to get container status \"0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e\": rpc error: code = NotFound desc = could not find container \"0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e\": container with ID starting with 0cd095e8ba276ef3afb5ef5d7c289b663f14bf3ce618e318e464f72c6c1aa15e not found: ID does not exist" Feb 24 02:03:36.173364 master-0 kubenswrapper[4207]: I0224 02:03:36.173352 4207 scope.go:117] "RemoveContainer" containerID="fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb" Feb 24 02:03:36.174017 master-0 kubenswrapper[4207]: E0224 02:03:36.173950 4207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb\": container with ID starting with fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb not found: ID does not exist" containerID="fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb" Feb 24 02:03:36.174108 master-0 kubenswrapper[4207]: I0224 02:03:36.174024 4207 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb"} err="failed to get container status \"fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb\": rpc error: code = NotFound desc = could not find container \"fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb\": container with ID starting with fdf844e0cfc654d3df3c7c38d2d4549d35254f807092f3af31106092a5e9bffb not found: ID does not exist" Feb 24 02:03:36.174108 master-0 kubenswrapper[4207]: I0224 02:03:36.174075 4207 scope.go:117] "RemoveContainer" containerID="304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61" Feb 24 02:03:36.174916 master-0 kubenswrapper[4207]: E0224 02:03:36.174862 4207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61\": container with ID starting with 304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61 not found: ID does not exist" containerID="304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61" Feb 24 02:03:36.175043 master-0 kubenswrapper[4207]: I0224 02:03:36.174934 4207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61"} err="failed to get container status \"304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61\": rpc error: code = NotFound desc = could not find container \"304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61\": container with ID starting with 304f018e75b8ea837a6c1758bf18ece57119ac34a2be377600b1e6267e65ee61 not found: ID does not exist" Feb 24 02:03:36.175043 master-0 kubenswrapper[4207]: I0224 02:03:36.174982 4207 scope.go:117] "RemoveContainer" containerID="1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb" Feb 24 
02:03:36.175516 master-0 kubenswrapper[4207]: E0224 02:03:36.175463 4207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb\": container with ID starting with 1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb not found: ID does not exist" containerID="1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb" Feb 24 02:03:36.175607 master-0 kubenswrapper[4207]: I0224 02:03:36.175506 4207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb"} err="failed to get container status \"1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb\": rpc error: code = NotFound desc = could not find container \"1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb\": container with ID starting with 1ab44020eede7767bace0990c6a5ce8d5fd6375706720daf62a15832a05696cb not found: ID does not exist" Feb 24 02:03:36.175607 master-0 kubenswrapper[4207]: I0224 02:03:36.175538 4207 scope.go:117] "RemoveContainer" containerID="d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d" Feb 24 02:03:36.176018 master-0 kubenswrapper[4207]: E0224 02:03:36.175970 4207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d\": container with ID starting with d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d not found: ID does not exist" containerID="d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d" Feb 24 02:03:36.176097 master-0 kubenswrapper[4207]: I0224 02:03:36.176012 4207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d"} err="failed 
to get container status \"d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d\": rpc error: code = NotFound desc = could not find container \"d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d\": container with ID starting with d34f583e77d75903384301039369523622480125533be2d573f01b3a6c06071d not found: ID does not exist" Feb 24 02:03:36.176097 master-0 kubenswrapper[4207]: I0224 02:03:36.176041 4207 scope.go:117] "RemoveContainer" containerID="022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff" Feb 24 02:03:36.176519 master-0 kubenswrapper[4207]: E0224 02:03:36.176469 4207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff\": container with ID starting with 022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff not found: ID does not exist" containerID="022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff" Feb 24 02:03:36.176606 master-0 kubenswrapper[4207]: I0224 02:03:36.176510 4207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff"} err="failed to get container status \"022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff\": rpc error: code = NotFound desc = could not find container \"022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff\": container with ID starting with 022a7efb43e6f5db3a609d614b46f18d131186112acc9f758e926a268d8724ff not found: ID does not exist" Feb 24 02:03:36.506787 master-0 kubenswrapper[4207]: I0224 02:03:36.504224 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:36.506787 master-0 kubenswrapper[4207]: E0224 02:03:36.504395 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:37.002523 master-0 kubenswrapper[4207]: I0224 02:03:37.002013 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"4f5b06a0a1103084565e7f3fed74736cb11f62b92bf6867022587965f1a2caaf"} Feb 24 02:03:37.002523 master-0 kubenswrapper[4207]: I0224 02:03:37.002511 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"c880f756d05774fc1f954066039c7ec198c9da869a02c1a619e01fcc3885fb5a"} Feb 24 02:03:37.002804 master-0 kubenswrapper[4207]: I0224 02:03:37.002536 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"9bb3aaaf98b3e613aa38c174dae3a871e1597827859f13849e7bd01ad03bb7bb"} Feb 24 02:03:37.002804 master-0 kubenswrapper[4207]: I0224 02:03:37.002559 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"1c39255c92a2233ed6cf746e8b12d337977d9bba6a9424c402e1eeeb4d639e30"} Feb 24 02:03:37.002804 master-0 kubenswrapper[4207]: I0224 02:03:37.002613 4207 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"059228c2a27c3aed25af8917cccf482faf03f812c73e457a250e417c4a568a0c"} Feb 24 02:03:37.002804 master-0 kubenswrapper[4207]: I0224 02:03:37.002630 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"4cabdcb3ddc56ae97f7b4649fc91fc0a40b0adb8f619c78d4eb6d40afa505204"} Feb 24 02:03:37.504454 master-0 kubenswrapper[4207]: I0224 02:03:37.504395 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:37.504730 master-0 kubenswrapper[4207]: E0224 02:03:37.504620 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:37.511344 master-0 kubenswrapper[4207]: I0224 02:03:37.511283 4207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c4541e-cb82-4d13-95b4-905dda52bd9a" path="/var/lib/kubelet/pods/70c4541e-cb82-4d13-95b4-905dda52bd9a/volumes" Feb 24 02:03:38.504482 master-0 kubenswrapper[4207]: I0224 02:03:38.504376 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:38.504795 master-0 kubenswrapper[4207]: E0224 02:03:38.504733 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:39.021379 master-0 kubenswrapper[4207]: I0224 02:03:39.021308 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"53aef3176bd11ce32053e4c2256ae3bd19adf8061abe89a3f26ff52596748dc6"} Feb 24 02:03:39.503741 master-0 kubenswrapper[4207]: I0224 02:03:39.503661 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:39.504012 master-0 kubenswrapper[4207]: E0224 02:03:39.503854 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:40.503340 master-0 kubenswrapper[4207]: I0224 02:03:40.503251 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:40.504108 master-0 kubenswrapper[4207]: E0224 02:03:40.503419 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:41.504140 master-0 kubenswrapper[4207]: I0224 02:03:41.504051 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:41.504943 master-0 kubenswrapper[4207]: E0224 02:03:41.504262 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:42.040560 master-0 kubenswrapper[4207]: I0224 02:03:42.040473 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"5380c680490e259bb66eb660b6baa7d7340e0ee146b1b9cd483ce9f97fef3094"} Feb 24 02:03:42.041173 master-0 kubenswrapper[4207]: I0224 02:03:42.041062 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:42.041254 master-0 kubenswrapper[4207]: I0224 02:03:42.041235 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:42.076167 master-0 kubenswrapper[4207]: I0224 02:03:42.076098 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:42.078200 master-0 kubenswrapper[4207]: I0224 02:03:42.078057 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" podStartSLOduration=7.077829722 podStartE2EDuration="7.077829722s" podCreationTimestamp="2026-02-24 02:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:03:42.076797223 +0000 UTC m=+107.420101513" watchObservedRunningTime="2026-02-24 02:03:42.077829722 +0000 UTC m=+107.421133992" Feb 24 02:03:42.505713 master-0 kubenswrapper[4207]: I0224 02:03:42.504731 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:42.505713 master-0 kubenswrapper[4207]: E0224 02:03:42.505299 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:42.517313 master-0 kubenswrapper[4207]: I0224 02:03:42.514423 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:42.517313 master-0 kubenswrapper[4207]: E0224 02:03:42.514826 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 02:03:42.517313 master-0 kubenswrapper[4207]: E0224 02:03:42.514863 4207 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 02:03:42.517313 master-0 kubenswrapper[4207]: E0224 02:03:42.514885 4207 projected.go:194] Error preparing data for projected volume kube-api-access-nn8hz for pod openshift-network-diagnostics/network-check-target-54b95: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 02:03:42.517313 master-0 kubenswrapper[4207]: E0224 02:03:42.514975 4207 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz podName:e3a675b9-feaa-4456-b7b4-0cd3afc42a42 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:14.514946802 +0000 UTC m=+139.858251072 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-nn8hz" (UniqueName: "kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz") pod "network-check-target-54b95" (UID: "e3a675b9-feaa-4456-b7b4-0cd3afc42a42") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 02:03:42.517313 master-0 kubenswrapper[4207]: I0224 02:03:42.516383 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tntcf"] Feb 24 02:03:42.517313 master-0 kubenswrapper[4207]: I0224 02:03:42.516494 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:42.517313 master-0 kubenswrapper[4207]: E0224 02:03:42.516669 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:42.522681 master-0 kubenswrapper[4207]: I0224 02:03:42.522623 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-54b95"] Feb 24 02:03:42.918017 master-0 kubenswrapper[4207]: I0224 02:03:42.917901 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:42.918157 master-0 kubenswrapper[4207]: E0224 02:03:42.918127 4207 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 02:03:42.918249 master-0 kubenswrapper[4207]: E0224 02:03:42.918221 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:46.918190113 +0000 UTC m=+172.261494393 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found Feb 24 02:03:43.044302 master-0 kubenswrapper[4207]: I0224 02:03:43.044231 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:43.044539 master-0 kubenswrapper[4207]: E0224 02:03:43.044393 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:43.045694 master-0 kubenswrapper[4207]: I0224 02:03:43.045625 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:43.084309 master-0 kubenswrapper[4207]: I0224 02:03:43.084195 4207 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:44.504138 master-0 kubenswrapper[4207]: I0224 02:03:44.504022 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:44.505335 master-0 kubenswrapper[4207]: I0224 02:03:44.504040 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:44.505335 master-0 kubenswrapper[4207]: E0224 02:03:44.504194 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-54b95" podUID="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" Feb 24 02:03:44.505335 master-0 kubenswrapper[4207]: E0224 02:03:44.504351 4207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tntcf" podUID="70e2ba24-4871-4d1d-9935-156fdbeb2810" Feb 24 02:03:46.372266 master-0 kubenswrapper[4207]: I0224 02:03:46.372093 4207 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Feb 24 02:03:46.372266 master-0 kubenswrapper[4207]: I0224 02:03:46.372267 4207 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 24 02:03:46.418715 master-0 kubenswrapper[4207]: I0224 02:03:46.418677 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"] Feb 24 02:03:46.419341 master-0 kubenswrapper[4207]: I0224 02:03:46.419298 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:46.430527 master-0 kubenswrapper[4207]: I0224 02:03:46.430452 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 02:03:46.430907 master-0 kubenswrapper[4207]: I0224 02:03:46.430864 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 02:03:46.431040 master-0 kubenswrapper[4207]: I0224 02:03:46.430987 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 24 02:03:46.431191 master-0 kubenswrapper[4207]: I0224 02:03:46.431119 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 24 02:03:46.431472 master-0 kubenswrapper[4207]: I0224 02:03:46.431435 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 24 02:03:46.431764 master-0 kubenswrapper[4207]: I0224 02:03:46.431725 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 24 02:03:46.433300 master-0 kubenswrapper[4207]: I0224 02:03:46.433253 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"] Feb 24 02:03:46.433887 master-0 kubenswrapper[4207]: I0224 02:03:46.433856 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:46.435434 master-0 kubenswrapper[4207]: I0224 02:03:46.435358 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"] Feb 24 02:03:46.436214 master-0 kubenswrapper[4207]: I0224 02:03:46.436145 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:46.439982 master-0 kubenswrapper[4207]: I0224 02:03:46.439933 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"] Feb 24 02:03:46.455980 master-0 kubenswrapper[4207]: I0224 02:03:46.455913 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:46.459612 master-0 kubenswrapper[4207]: I0224 02:03:46.457463 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr"] Feb 24 02:03:46.459612 master-0 kubenswrapper[4207]: I0224 02:03:46.458321 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:46.464997 master-0 kubenswrapper[4207]: I0224 02:03:46.464118 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 02:03:46.464997 master-0 kubenswrapper[4207]: I0224 02:03:46.464543 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 02:03:46.464997 master-0 kubenswrapper[4207]: I0224 02:03:46.464940 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 02:03:46.465424 master-0 kubenswrapper[4207]: I0224 02:03:46.465351 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 02:03:46.465738 master-0 kubenswrapper[4207]: I0224 02:03:46.465704 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 02:03:46.465843 master-0 kubenswrapper[4207]: I0224 02:03:46.465819 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 02:03:46.466006 master-0 kubenswrapper[4207]: I0224 02:03:46.465905 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 02:03:46.474320 master-0 kubenswrapper[4207]: I0224 02:03:46.470273 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 02:03:46.480599 master-0 kubenswrapper[4207]: I0224 02:03:46.474792 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 02:03:46.496596 master-0 kubenswrapper[4207]: I0224 02:03:46.493763 4207 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"] Feb 24 02:03:46.496596 master-0 kubenswrapper[4207]: I0224 02:03:46.494722 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"] Feb 24 02:03:46.496596 master-0 kubenswrapper[4207]: I0224 02:03:46.494998 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"] Feb 24 02:03:46.496596 master-0 kubenswrapper[4207]: I0224 02:03:46.495415 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:46.496596 master-0 kubenswrapper[4207]: I0224 02:03:46.495954 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:03:46.496596 master-0 kubenswrapper[4207]: I0224 02:03:46.496373 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.497019 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.497304 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc"] Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.497655 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"] Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.498109 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.498492 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.498666 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"] Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.498715 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.498897 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.499211 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.500648 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"] Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.501016 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:46.501600 master-0 kubenswrapper[4207]: I0224 02:03:46.501333 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"] Feb 24 02:03:46.502093 master-0 kubenswrapper[4207]: I0224 02:03:46.501844 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:03:46.507602 master-0 kubenswrapper[4207]: I0224 02:03:46.502560 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"] Feb 24 02:03:46.507602 master-0 kubenswrapper[4207]: I0224 02:03:46.505944 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:46.507602 master-0 kubenswrapper[4207]: I0224 02:03:46.506450 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.507602 master-0 kubenswrapper[4207]: I0224 02:03:46.506525 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 02:03:46.507602 master-0 kubenswrapper[4207]: I0224 02:03:46.506684 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"] Feb 24 02:03:46.507602 master-0 kubenswrapper[4207]: I0224 02:03:46.506832 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:46.507602 master-0 kubenswrapper[4207]: I0224 02:03:46.507465 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"] Feb 24 02:03:46.507872 master-0 kubenswrapper[4207]: I0224 02:03:46.507857 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"] Feb 24 02:03:46.511590 master-0 kubenswrapper[4207]: I0224 02:03:46.508289 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"] Feb 24 02:03:46.511590 master-0 kubenswrapper[4207]: I0224 02:03:46.508832 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:46.511590 master-0 kubenswrapper[4207]: I0224 02:03:46.508953 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"] Feb 24 02:03:46.511590 master-0 kubenswrapper[4207]: I0224 02:03:46.509294 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:03:46.511590 master-0 kubenswrapper[4207]: I0224 02:03:46.509369 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:46.511590 master-0 kubenswrapper[4207]: I0224 02:03:46.509749 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:03:46.511590 master-0 kubenswrapper[4207]: I0224 02:03:46.510172 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:46.515603 master-0 kubenswrapper[4207]: I0224 02:03:46.513620 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"] Feb 24 02:03:46.515603 master-0 kubenswrapper[4207]: I0224 02:03:46.514022 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:46.515603 master-0 kubenswrapper[4207]: I0224 02:03:46.514743 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"] Feb 24 02:03:46.515603 master-0 kubenswrapper[4207]: I0224 02:03:46.515302 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:46.515603 master-0 kubenswrapper[4207]: I0224 02:03:46.515303 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"] Feb 24 02:03:46.519590 master-0 kubenswrapper[4207]: I0224 02:03:46.517880 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 02:03:46.519590 master-0 kubenswrapper[4207]: I0224 02:03:46.518649 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 24 02:03:46.519590 master-0 kubenswrapper[4207]: I0224 02:03:46.519243 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 02:03:46.521879 master-0 kubenswrapper[4207]: I0224 02:03:46.520609 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 02:03:46.521879 master-0 kubenswrapper[4207]: I0224 02:03:46.521282 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 02:03:46.528317 master-0 kubenswrapper[4207]: I0224 02:03:46.526730 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 02:03:46.528317 master-0 kubenswrapper[4207]: I0224 02:03:46.526778 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 24 02:03:46.528317 master-0 kubenswrapper[4207]: I0224 02:03:46.527498 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 
02:03:46.528317 master-0 kubenswrapper[4207]: I0224 02:03:46.527784 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 02:03:46.528317 master-0 kubenswrapper[4207]: I0224 02:03:46.528002 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 02:03:46.531587 master-0 kubenswrapper[4207]: I0224 02:03:46.528673 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 02:03:46.531587 master-0 kubenswrapper[4207]: I0224 02:03:46.530889 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 02:03:46.531587 master-0 kubenswrapper[4207]: I0224 02:03:46.531108 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 02:03:46.531587 master-0 kubenswrapper[4207]: I0224 02:03:46.531249 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 02:03:46.550959 master-0 kubenswrapper[4207]: I0224 02:03:46.550191 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"] Feb 24 02:03:46.550959 master-0 kubenswrapper[4207]: I0224 02:03:46.550446 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:03:46.552307 master-0 kubenswrapper[4207]: I0224 02:03:46.551336 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 02:03:46.552307 master-0 kubenswrapper[4207]: I0224 02:03:46.552219 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-hxcn2"] Feb 24 02:03:46.559588 master-0 kubenswrapper[4207]: I0224 02:03:46.553357 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:46.559588 master-0 kubenswrapper[4207]: I0224 02:03:46.553744 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 02:03:46.559588 master-0 kubenswrapper[4207]: I0224 02:03:46.554489 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:03:46.559588 master-0 kubenswrapper[4207]: I0224 02:03:46.555497 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 02:03:46.559588 master-0 kubenswrapper[4207]: I0224 02:03:46.555644 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 02:03:46.559588 master-0 kubenswrapper[4207]: I0224 02:03:46.555758 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 24 02:03:46.559588 master-0 kubenswrapper[4207]: I0224 02:03:46.556421 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"] Feb 24 02:03:46.559588 master-0 kubenswrapper[4207]: I0224 02:03:46.558228 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"] Feb 24 02:03:46.559588 master-0 kubenswrapper[4207]: I0224 02:03:46.558285 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"] Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.560635 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"] Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561344 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 
24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561394 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561438 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561484 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-config\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561535 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8rjx\" (UniqueName: \"kubernetes.io/projected/91d16f7b-390a-4d9d-99d6-cc8e210801d1-kube-api-access-b8rjx\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561603 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561666 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561705 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561740 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561778 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqqkv\" (UniqueName: \"kubernetes.io/projected/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-kube-api-access-dqqkv\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: 
\"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561816 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-serving-cert\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561855 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561895 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:46.568591 master-0 kubenswrapper[4207]: I0224 02:03:46.561948 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtf52\" (UniqueName: \"kubernetes.io/projected/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-kube-api-access-jtf52\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:46.569102 master-0 kubenswrapper[4207]: 
I0224 02:03:46.562005 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:46.569102 master-0 kubenswrapper[4207]: I0224 02:03:46.562049 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb52w\" (UniqueName: \"kubernetes.io/projected/02f1d753-983a-4c4a-b1a0-560de173859a-kube-api-access-mb52w\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:46.569102 master-0 kubenswrapper[4207]: I0224 02:03:46.562088 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twgrj\" (UniqueName: \"kubernetes.io/projected/12b89e05-a503-47aa-90b2-4d741e015b19-kube-api-access-twgrj\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:46.575594 master-0 kubenswrapper[4207]: I0224 02:03:46.572765 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc"] Feb 24 02:03:46.576053 master-0 kubenswrapper[4207]: I0224 02:03:46.575985 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"] Feb 24 02:03:46.576094 master-0 kubenswrapper[4207]: I0224 02:03:46.576057 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"] Feb 24 02:03:46.576362 master-0 kubenswrapper[4207]: I0224 02:03:46.576341 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 02:03:46.576507 master-0 kubenswrapper[4207]: I0224 02:03:46.576482 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 02:03:46.577051 master-0 kubenswrapper[4207]: I0224 02:03:46.576691 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 02:03:46.577106 master-0 kubenswrapper[4207]: I0224 02:03:46.577066 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 02:03:46.577327 master-0 kubenswrapper[4207]: I0224 02:03:46.577312 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 02:03:46.577403 master-0 kubenswrapper[4207]: I0224 02:03:46.577382 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 02:03:46.577441 master-0 kubenswrapper[4207]: I0224 02:03:46.577430 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 02:03:46.577583 master-0 kubenswrapper[4207]: I0224 02:03:46.577554 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 02:03:46.577717 master-0 kubenswrapper[4207]: I0224 02:03:46.577704 4207 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 02:03:46.577821 master-0 kubenswrapper[4207]: I0224 02:03:46.577805 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 02:03:46.577862 master-0 kubenswrapper[4207]: I0224 02:03:46.577840 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 02:03:46.578194 master-0 kubenswrapper[4207]: I0224 02:03:46.577907 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"] Feb 24 02:03:46.578194 master-0 kubenswrapper[4207]: I0224 02:03:46.577999 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 02:03:46.578765 master-0 kubenswrapper[4207]: I0224 02:03:46.578738 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 24 02:03:46.578853 master-0 kubenswrapper[4207]: I0224 02:03:46.578811 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 02:03:46.579305 master-0 kubenswrapper[4207]: I0224 02:03:46.579278 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 24 02:03:46.579735 master-0 kubenswrapper[4207]: I0224 02:03:46.579713 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 02:03:46.579854 master-0 kubenswrapper[4207]: I0224 02:03:46.579822 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 24 02:03:46.579969 master-0 kubenswrapper[4207]: I0224 02:03:46.579943 4207 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 02:03:46.580087 master-0 kubenswrapper[4207]: I0224 02:03:46.580068 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 02:03:46.580167 master-0 kubenswrapper[4207]: I0224 02:03:46.580154 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 24 02:03:46.580231 master-0 kubenswrapper[4207]: I0224 02:03:46.580219 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 02:03:46.580327 master-0 kubenswrapper[4207]: I0224 02:03:46.580314 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 02:03:46.580372 master-0 kubenswrapper[4207]: I0224 02:03:46.580351 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"] Feb 24 02:03:46.580400 master-0 kubenswrapper[4207]: I0224 02:03:46.580390 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 02:03:46.580425 master-0 kubenswrapper[4207]: I0224 02:03:46.580410 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 02:03:46.580498 master-0 kubenswrapper[4207]: I0224 02:03:46.580485 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 02:03:46.580710 master-0 kubenswrapper[4207]: I0224 02:03:46.580541 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 02:03:46.580965 master-0 kubenswrapper[4207]: I0224 02:03:46.580945 4207 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 24 02:03:46.581000 master-0 kubenswrapper[4207]: I0224 02:03:46.580962 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 02:03:46.581085 master-0 kubenswrapper[4207]: I0224 02:03:46.581016 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 24 02:03:46.581085 master-0 kubenswrapper[4207]: I0224 02:03:46.580591 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 24 02:03:46.581138 master-0 kubenswrapper[4207]: I0224 02:03:46.580608 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 24 02:03:46.581138 master-0 kubenswrapper[4207]: I0224 02:03:46.581102 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 24 02:03:46.581184 master-0 kubenswrapper[4207]: I0224 02:03:46.580631 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 02:03:46.581210 master-0 kubenswrapper[4207]: I0224 02:03:46.581196 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 24 02:03:46.581210 master-0 kubenswrapper[4207]: I0224 02:03:46.580485 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 24 02:03:46.581265 master-0 kubenswrapper[4207]: I0224 02:03:46.580648 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 02:03:46.581265 master-0 kubenswrapper[4207]: I0224 02:03:46.580661 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 02:03:46.581352 master-0 kubenswrapper[4207]: I0224 02:03:46.580690 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 24 02:03:46.581382 master-0 kubenswrapper[4207]: I0224 02:03:46.581354 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 24 02:03:46.581457 master-0 kubenswrapper[4207]: I0224 02:03:46.581438 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 24 02:03:46.581504 master-0 kubenswrapper[4207]: I0224 02:03:46.581441 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 24 02:03:46.581504 master-0 kubenswrapper[4207]: I0224 02:03:46.581499 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 24 02:03:46.581556 master-0 kubenswrapper[4207]: I0224 02:03:46.581510 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 02:03:46.581603 master-0 kubenswrapper[4207]: I0224 02:03:46.581582 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 24 02:03:46.581718 master-0 kubenswrapper[4207]: I0224 02:03:46.581700 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 24 02:03:46.585500 master-0 kubenswrapper[4207]: I0224 02:03:46.582067 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"]
Feb 24 02:03:46.585500 master-0 kubenswrapper[4207]: I0224 02:03:46.582440 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"]
Feb 24 02:03:46.585500 master-0 kubenswrapper[4207]: I0224 02:03:46.578986 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 24 02:03:46.587419 master-0 kubenswrapper[4207]: I0224 02:03:46.587393 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"]
Feb 24 02:03:46.587419 master-0 kubenswrapper[4207]: I0224 02:03:46.587419 4207 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-rjbl5"]
Feb 24 02:03:46.587869 master-0 kubenswrapper[4207]: I0224 02:03:46.587843 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-rjbl5"
Feb 24 02:03:46.589342 master-0 kubenswrapper[4207]: I0224 02:03:46.589321 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 24 02:03:46.589441 master-0 kubenswrapper[4207]: I0224 02:03:46.589424 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"]
Feb 24 02:03:46.589921 master-0 kubenswrapper[4207]: I0224 02:03:46.589895 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 24 02:03:46.590688 master-0 kubenswrapper[4207]: I0224 02:03:46.590669 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 24 02:03:46.590760 master-0 kubenswrapper[4207]: I0224 02:03:46.590744 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"]
Feb 24 02:03:46.590790 master-0 kubenswrapper[4207]: I0224 02:03:46.590762 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"]
Feb 24 02:03:46.591166 master-0 kubenswrapper[4207]: I0224 02:03:46.591149 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 24 02:03:46.591308 master-0 kubenswrapper[4207]: I0224 02:03:46.591292 4207 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 24 02:03:46.591952 master-0 kubenswrapper[4207]: I0224 02:03:46.591899 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 24 02:03:46.592607 master-0 kubenswrapper[4207]: I0224 02:03:46.592198 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 02:03:46.593317 master-0 kubenswrapper[4207]: I0224 02:03:46.593279 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"]
Feb 24 02:03:46.593362 master-0 kubenswrapper[4207]: I0224 02:03:46.593320 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"]
Feb 24 02:03:46.594079 master-0 kubenswrapper[4207]: I0224 02:03:46.594064 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"]
Feb 24 02:03:46.594357 master-0 kubenswrapper[4207]: I0224 02:03:46.594328 4207 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 24 02:03:46.594619 master-0 kubenswrapper[4207]: I0224 02:03:46.594596 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-hxcn2"]
Feb 24 02:03:46.595351 master-0 kubenswrapper[4207]: I0224 02:03:46.595333 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"]
Feb 24 02:03:46.597276 master-0 kubenswrapper[4207]: I0224 02:03:46.597247 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"]
Feb 24 02:03:46.597953 master-0 kubenswrapper[4207]: I0224 02:03:46.597911 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr"]
Feb 24 02:03:46.598890 master-0 kubenswrapper[4207]: I0224 02:03:46.598638 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"]
Feb 24 02:03:46.602770 master-0 kubenswrapper[4207]: I0224 02:03:46.602748 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"]
Feb 24 02:03:46.663124 master-0 kubenswrapper[4207]: I0224 02:03:46.663083 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:46.663224 master-0 kubenswrapper[4207]: I0224 02:03:46.663141 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/303d5058-84df-40d1-a941-896b093ae470-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"
Feb 24 02:03:46.663224 master-0 kubenswrapper[4207]: I0224 02:03:46.663174 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:46.663224 master-0 kubenswrapper[4207]: I0224 02:03:46.663197 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2"
Feb 24 02:03:46.663224 master-0 kubenswrapper[4207]: I0224 02:03:46.663219 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbda577-b943-4b5c-b041-948aece8e40f-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"
Feb 24 02:03:46.663398 master-0 kubenswrapper[4207]: I0224 02:03:46.663243 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:46.663398 master-0 kubenswrapper[4207]: I0224 02:03:46.663270 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nwzm\" (UniqueName: \"kubernetes.io/projected/c92835f0-7f32-4584-8304-843d7979392a-kube-api-access-6nwzm\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:03:46.663398 master-0 kubenswrapper[4207]: I0224 02:03:46.663294 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:46.663398 master-0 kubenswrapper[4207]: I0224 02:03:46.663318 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"
Feb 24 02:03:46.663398 master-0 kubenswrapper[4207]: I0224 02:03:46.663342 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"
Feb 24 02:03:46.663398 master-0 kubenswrapper[4207]: I0224 02:03:46.663364 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-trusted-ca\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:03:46.663398 master-0 kubenswrapper[4207]: I0224 02:03:46.663393 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb52w\" (UniqueName: \"kubernetes.io/projected/02f1d753-983a-4c4a-b1a0-560de173859a-kube-api-access-mb52w\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663417 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663446 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663471 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85222bf-f51a-4232-8db1-1e6ee593617b-config\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663495 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ssxg\" (UniqueName: \"kubernetes.io/projected/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-kube-api-access-2ssxg\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663517 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86pcb\" (UniqueName: \"kubernetes.io/projected/7b098bd4-5751-4b01-8409-0688fd29233e-kube-api-access-86pcb\") pod \"csi-snapshot-controller-operator-6fb4df594f-c95qc\" (UID: \"7b098bd4-5751-4b01-8409-0688fd29233e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663537 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663556 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbda577-b943-4b5c-b041-948aece8e40f-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663593 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6f7j\" (UniqueName: \"kubernetes.io/projected/cabdddba-5507-4e47-98ef-a00c6d0f305d-kube-api-access-h6f7j\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663612 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85222bf-f51a-4232-8db1-1e6ee593617b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663634 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-config\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663654 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36d8451-0fda-4d9d-a850-d05c8f847016-config\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663680 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c92835f0-7f32-4584-8304-843d7979392a-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663701 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sp95\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-kube-api-access-9sp95\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:46.663733 master-0 kubenswrapper[4207]: I0224 02:03:46.663723 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663746 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02536a3-7d3e-4e74-9625-aefed518ec35-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663765 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663783 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663803 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663820 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-bound-sa-token\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663841 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02536a3-7d3e-4e74-9625-aefed518ec35-config\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663858 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663876 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663896 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njjq8\" (UniqueName: \"kubernetes.io/projected/b36d8451-0fda-4d9d-a850-d05c8f847016-kube-api-access-njjq8\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663917 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqkv\" (UniqueName: \"kubernetes.io/projected/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-kube-api-access-dqqkv\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663935 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-serving-cert\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663952 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84dc269-43ae-4083-9998-a0b3c90bb681-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663972 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:03:46.664209 master-0 kubenswrapper[4207]: I0224 02:03:46.663990 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92835f0-7f32-4584-8304-843d7979392a-serving-cert\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664006 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664034 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hfh\" (UniqueName: \"kubernetes.io/projected/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-kube-api-access-82hfh\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664057 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664074 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgjlz\" (UniqueName: \"kubernetes.io/projected/6320dbb5-b84d-4a57-8c65-fbed8421f84a-kube-api-access-pgjlz\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664098 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-images\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664117 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twgrj\" (UniqueName: \"kubernetes.io/projected/12b89e05-a503-47aa-90b2-4d741e015b19-kube-api-access-twgrj\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664134 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6lsp\" (UniqueName: \"kubernetes.io/projected/db8d6627-394c-4087-bfa4-bf7580f6bb4b-kube-api-access-x6lsp\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664151 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssz8p\" (UniqueName: \"kubernetes.io/projected/6a9ccd8e-d964-4c03-8ffc-51b464030c25-kube-api-access-ssz8p\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664170 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2b65\" (UniqueName: \"kubernetes.io/projected/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-kube-api-access-n2b65\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664188 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664207 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02536a3-7d3e-4e74-9625-aefed518ec35-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664225 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-serving-cert\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664245 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:03:46.664733 master-0 kubenswrapper[4207]: I0224 02:03:46.664264 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a9ccd8e-d964-4c03-8ffc-51b464030c25-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664280 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85222bf-f51a-4232-8db1-1e6ee593617b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664312 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpg26\" (UniqueName: \"kubernetes.io/projected/fcbda577-b943-4b5c-b041-948aece8e40f-kube-api-access-vpg26\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664328 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36d8451-0fda-4d9d-a850-d05c8f847016-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664345 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8hv\" (UniqueName: \"kubernetes.io/projected/2cb764f6-40f8-4e87-8be0-b9d7b0364201-kube-api-access-sp8hv\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664364 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bl6\" (UniqueName: \"kubernetes.io/projected/303d5058-84df-40d1-a941-896b093ae470-kube-api-access-79bl6\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664381 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664406 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8rjx\" (UniqueName: \"kubernetes.io/projected/91d16f7b-390a-4d9d-99d6-cc8e210801d1-kube-api-access-b8rjx\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664422 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/303d5058-84df-40d1-a941-896b093ae470-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664438 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qph4g\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-kube-api-access-qph4g\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664467 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664484 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664504 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664522 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabdddba-5507-4e47-98ef-a00c6d0f305d-serving-cert\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664539 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-config\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:46.668426 master-0 kubenswrapper[4207]: I0224 02:03:46.664556 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-config\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: I0224 02:03:46.664593 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " 
pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: I0224 02:03:46.664612 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwjpw\" (UniqueName: \"kubernetes.io/projected/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-kube-api-access-pwjpw\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: I0224 02:03:46.664630 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: I0224 02:03:46.664647 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-client\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: E0224 02:03:46.665199 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: E0224 02:03:46.665237 4207 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: E0224 02:03:46.665302 4207 secret.go:189] Couldn't get secret 
openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: E0224 02:03:46.665332 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.165305145 +0000 UTC m=+112.508609375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: E0224 02:03:46.665369 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.165347036 +0000 UTC m=+112.508651296 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: I0224 02:03:46.665499 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: I0224 02:03:46.665523 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtf52\" (UniqueName: \"kubernetes.io/projected/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-kube-api-access-jtf52\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: I0224 02:03:46.665541 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: I0224 02:03:46.665560 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-config\") pod 
\"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: I0224 02:03:46.665607 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: E0224 02:03:46.665640 4207 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 02:03:46.669050 master-0 kubenswrapper[4207]: E0224 02:03:46.665660 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: E0224 02:03:46.665671 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.165662955 +0000 UTC m=+112.508967195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: E0224 02:03:46.665824 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.165803989 +0000 UTC m=+112.509108229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.666228 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.666330 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-config\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.666393 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: E0224 02:03:46.666482 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.166432706 +0000 UTC m=+112.509736946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.669098 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.670709 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.670868 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.680233 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-serving-cert\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.684826 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqkv\" (UniqueName: \"kubernetes.io/projected/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-kube-api-access-dqqkv\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.686256 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twgrj\" (UniqueName: \"kubernetes.io/projected/12b89e05-a503-47aa-90b2-4d741e015b19-kube-api-access-twgrj\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:46.686641 master-0 kubenswrapper[4207]: I0224 02:03:46.686391 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8rjx\" (UniqueName: \"kubernetes.io/projected/91d16f7b-390a-4d9d-99d6-cc8e210801d1-kube-api-access-b8rjx\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 
24 02:03:46.694200 master-0 kubenswrapper[4207]: I0224 02:03:46.694163 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtf52\" (UniqueName: \"kubernetes.io/projected/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-kube-api-access-jtf52\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:46.695423 master-0 kubenswrapper[4207]: I0224 02:03:46.695374 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb52w\" (UniqueName: \"kubernetes.io/projected/02f1d753-983a-4c4a-b1a0-560de173859a-kube-api-access-mb52w\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:46.766741 master-0 kubenswrapper[4207]: I0224 02:03:46.766699 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qph4g\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-kube-api-access-qph4g\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:46.766797 master-0 kubenswrapper[4207]: I0224 02:03:46.766752 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.766989 master-0 kubenswrapper[4207]: I0224 02:03:46.766944 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabdddba-5507-4e47-98ef-a00c6d0f305d-serving-cert\") pod 
\"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:46.767039 master-0 kubenswrapper[4207]: I0224 02:03:46.767001 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-config\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.767599 master-0 kubenswrapper[4207]: I0224 02:03:46.767541 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.767730 master-0 kubenswrapper[4207]: I0224 02:03:46.767683 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-config\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:46.767805 master-0 kubenswrapper[4207]: I0224 02:03:46.767749 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:46.767854 master-0 kubenswrapper[4207]: I0224 02:03:46.767826 4207 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pwjpw\" (UniqueName: \"kubernetes.io/projected/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-kube-api-access-pwjpw\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.767892 master-0 kubenswrapper[4207]: I0224 02:03:46.767868 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:46.767931 master-0 kubenswrapper[4207]: I0224 02:03:46.767908 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-client\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.767969 master-0 kubenswrapper[4207]: I0224 02:03:46.767951 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:46.768009 master-0 kubenswrapper[4207]: I0224 02:03:46.767986 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:46.768050 master-0 kubenswrapper[4207]: I0224 02:03:46.768031 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/303d5058-84df-40d1-a941-896b093ae470-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:46.768088 master-0 kubenswrapper[4207]: I0224 02:03:46.768051 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-config\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.768088 master-0 kubenswrapper[4207]: I0224 02:03:46.768069 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:03:46.768676 master-0 kubenswrapper[4207]: E0224 02:03:46.768635 4207 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 02:03:46.768731 master-0 kubenswrapper[4207]: I0224 02:03:46.768675 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " 
pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.768783 master-0 kubenswrapper[4207]: E0224 02:03:46.768732 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.268701785 +0000 UTC m=+112.612006055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found Feb 24 02:03:46.768783 master-0 kubenswrapper[4207]: I0224 02:03:46.768769 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:46.768857 master-0 kubenswrapper[4207]: I0224 02:03:46.768823 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:46.768857 master-0 kubenswrapper[4207]: I0224 02:03:46.768828 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-config\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " 
pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:46.768927 master-0 kubenswrapper[4207]: I0224 02:03:46.768872 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbda577-b943-4b5c-b041-948aece8e40f-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:03:46.768965 master-0 kubenswrapper[4207]: I0224 02:03:46.768927 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:46.769002 master-0 kubenswrapper[4207]: I0224 02:03:46.768969 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nwzm\" (UniqueName: \"kubernetes.io/projected/c92835f0-7f32-4584-8304-843d7979392a-kube-api-access-6nwzm\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:03:46.769038 master-0 kubenswrapper[4207]: I0224 02:03:46.769003 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:46.769074 master-0 kubenswrapper[4207]: I0224 
02:03:46.769035 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"
Feb 24 02:03:46.769162 master-0 kubenswrapper[4207]: E0224 02:03:46.769132 4207 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Feb 24 02:03:46.769203 master-0 kubenswrapper[4207]: E0224 02:03:46.769173 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.269162558 +0000 UTC m=+112.612466798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found
Feb 24 02:03:46.769245 master-0 kubenswrapper[4207]: E0224 02:03:46.769208 4207 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 02:03:46.769245 master-0 kubenswrapper[4207]: E0224 02:03:46.769228 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.269222029 +0000 UTC m=+112.612526269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found
Feb 24 02:03:46.769312 master-0 kubenswrapper[4207]: I0224 02:03:46.769247 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"
Feb 24 02:03:46.769312 master-0 kubenswrapper[4207]: I0224 02:03:46.769267 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-trusted-ca\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:03:46.769312 master-0 kubenswrapper[4207]: I0224 02:03:46.769297 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85222bf-f51a-4232-8db1-1e6ee593617b-config\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:03:46.769417 master-0 kubenswrapper[4207]: I0224 02:03:46.769314 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ssxg\" (UniqueName: \"kubernetes.io/projected/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-kube-api-access-2ssxg\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"
Feb 24 02:03:46.769417 master-0 kubenswrapper[4207]: I0224 02:03:46.769339 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv6t5\" (UniqueName: \"kubernetes.io/projected/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-kube-api-access-qv6t5\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5"
Feb 24 02:03:46.769417 master-0 kubenswrapper[4207]: I0224 02:03:46.769362 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86pcb\" (UniqueName: \"kubernetes.io/projected/7b098bd4-5751-4b01-8409-0688fd29233e-kube-api-access-86pcb\") pod \"csi-snapshot-controller-operator-6fb4df594f-c95qc\" (UID: \"7b098bd4-5751-4b01-8409-0688fd29233e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc"
Feb 24 02:03:46.769417 master-0 kubenswrapper[4207]: I0224 02:03:46.769381 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"
Feb 24 02:03:46.769417 master-0 kubenswrapper[4207]: I0224 02:03:46.769399 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbda577-b943-4b5c-b041-948aece8e40f-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"
Feb 24 02:03:46.769417 master-0 kubenswrapper[4207]: I0224 02:03:46.769417 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-host-slash\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769453 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6f7j\" (UniqueName: \"kubernetes.io/projected/cabdddba-5507-4e47-98ef-a00c6d0f305d-kube-api-access-h6f7j\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769472 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85222bf-f51a-4232-8db1-1e6ee593617b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769493 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36d8451-0fda-4d9d-a850-d05c8f847016-config\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769498 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769515 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c92835f0-7f32-4584-8304-843d7979392a-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769537 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sp95\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-kube-api-access-9sp95\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769559 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02536a3-7d3e-4e74-9625-aefed518ec35-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769594 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769569 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:46.769632 master-0 kubenswrapper[4207]: I0224 02:03:46.769635 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:46.769952 master-0 kubenswrapper[4207]: E0224 02:03:46.769717 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 24 02:03:46.769952 master-0 kubenswrapper[4207]: E0224 02:03:46.769782 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.269760404 +0000 UTC m=+112.613064674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found
Feb 24 02:03:46.769952 master-0 kubenswrapper[4207]: I0224 02:03:46.769821 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:03:46.769952 master-0 kubenswrapper[4207]: I0224 02:03:46.769858 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-bound-sa-token\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:03:46.769952 master-0 kubenswrapper[4207]: I0224 02:03:46.769893 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02536a3-7d3e-4e74-9625-aefed518ec35-config\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:46.769952 master-0 kubenswrapper[4207]: I0224 02:03:46.769929 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:46.769952 master-0 kubenswrapper[4207]: I0224 02:03:46.769938 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"
Feb 24 02:03:46.770184 master-0 kubenswrapper[4207]: I0224 02:03:46.769965 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:03:46.770184 master-0 kubenswrapper[4207]: I0224 02:03:46.769979 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:46.770184 master-0 kubenswrapper[4207]: E0224 02:03:46.770037 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 24 02:03:46.770184 master-0 kubenswrapper[4207]: E0224 02:03:46.770082 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.270067473 +0000 UTC m=+112.613371753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found
Feb 24 02:03:46.770184 master-0 kubenswrapper[4207]: E0224 02:03:46.770135 4207 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 02:03:46.770345 master-0 kubenswrapper[4207]: E0224 02:03:46.770200 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.270178976 +0000 UTC m=+112.613483246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found
Feb 24 02:03:46.770436 master-0 kubenswrapper[4207]: I0224 02:03:46.770404 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"
Feb 24 02:03:46.770482 master-0 kubenswrapper[4207]: E0224 02:03:46.770473 4207 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 24 02:03:46.770536 master-0 kubenswrapper[4207]: E0224 02:03:46.770498 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.270490375 +0000 UTC m=+112.613794615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found
Feb 24 02:03:46.771160 master-0 kubenswrapper[4207]: I0224 02:03:46.771117 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36d8451-0fda-4d9d-a850-d05c8f847016-config\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:03:46.771396 master-0 kubenswrapper[4207]: I0224 02:03:46.771345 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c92835f0-7f32-4584-8304-843d7979392a-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:03:46.771396 master-0 kubenswrapper[4207]: I0224 02:03:46.771379 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02536a3-7d3e-4e74-9625-aefed518ec35-config\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:46.771474 master-0 kubenswrapper[4207]: I0224 02:03:46.770033 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njjq8\" (UniqueName: \"kubernetes.io/projected/b36d8451-0fda-4d9d-a850-d05c8f847016-kube-api-access-njjq8\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:03:46.771530 master-0 kubenswrapper[4207]: I0224 02:03:46.771503 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84dc269-43ae-4083-9998-a0b3c90bb681-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:46.771582 master-0 kubenswrapper[4207]: I0224 02:03:46.771554 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92835f0-7f32-4584-8304-843d7979392a-serving-cert\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:03:46.771655 master-0 kubenswrapper[4207]: I0224 02:03:46.771620 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"
Feb 24 02:03:46.771751 master-0 kubenswrapper[4207]: I0224 02:03:46.771714 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-client\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:46.771751 master-0 kubenswrapper[4207]: I0224 02:03:46.771684 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hfh\" (UniqueName: \"kubernetes.io/projected/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-kube-api-access-82hfh\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:46.771833 master-0 kubenswrapper[4207]: I0224 02:03:46.771813 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgjlz\" (UniqueName: \"kubernetes.io/projected/6320dbb5-b84d-4a57-8c65-fbed8421f84a-kube-api-access-pgjlz\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:03:46.771871 master-0 kubenswrapper[4207]: I0224 02:03:46.771853 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-images\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:46.771907 master-0 kubenswrapper[4207]: I0224 02:03:46.771891 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6lsp\" (UniqueName: \"kubernetes.io/projected/db8d6627-394c-4087-bfa4-bf7580f6bb4b-kube-api-access-x6lsp\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:46.771948 master-0 kubenswrapper[4207]: I0224 02:03:46.771927 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssz8p\" (UniqueName: \"kubernetes.io/projected/6a9ccd8e-d964-4c03-8ffc-51b464030c25-kube-api-access-ssz8p\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:46.772117 master-0 kubenswrapper[4207]: I0224 02:03:46.772080 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2b65\" (UniqueName: \"kubernetes.io/projected/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-kube-api-access-n2b65\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"
Feb 24 02:03:46.772159 master-0 kubenswrapper[4207]: I0224 02:03:46.772144 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02536a3-7d3e-4e74-9625-aefed518ec35-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:46.772195 master-0 kubenswrapper[4207]: I0224 02:03:46.772177 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-serving-cert\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:46.772232 master-0 kubenswrapper[4207]: I0224 02:03:46.772211 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a9ccd8e-d964-4c03-8ffc-51b464030c25-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:46.772276 master-0 kubenswrapper[4207]: I0224 02:03:46.772246 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85222bf-f51a-4232-8db1-1e6ee593617b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:03:46.772313 master-0 kubenswrapper[4207]: I0224 02:03:46.772279 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpg26\" (UniqueName: \"kubernetes.io/projected/fcbda577-b943-4b5c-b041-948aece8e40f-kube-api-access-vpg26\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"
Feb 24 02:03:46.772367 master-0 kubenswrapper[4207]: I0224 02:03:46.772314 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36d8451-0fda-4d9d-a850-d05c8f847016-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:03:46.772405 master-0 kubenswrapper[4207]: I0224 02:03:46.772374 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8hv\" (UniqueName: \"kubernetes.io/projected/2cb764f6-40f8-4e87-8be0-b9d7b0364201-kube-api-access-sp8hv\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2"
Feb 24 02:03:46.772442 master-0 kubenswrapper[4207]: I0224 02:03:46.772416 4207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-iptables-alerter-script\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5"
Feb 24 02:03:46.772492 master-0 kubenswrapper[4207]: I0224 02:03:46.772464 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bl6\" (UniqueName: \"kubernetes.io/projected/303d5058-84df-40d1-a941-896b093ae470-kube-api-access-79bl6\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"
Feb 24 02:03:46.772529 master-0 kubenswrapper[4207]: I0224 02:03:46.772501 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/303d5058-84df-40d1-a941-896b093ae470-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"
Feb 24 02:03:46.772615 master-0 kubenswrapper[4207]: E0224 02:03:46.771376 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 24 02:03:46.772655 master-0 kubenswrapper[4207]: I0224 02:03:46.772612 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-trusted-ca\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:03:46.773209 master-0 kubenswrapper[4207]: I0224 02:03:46.773155 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:46.773209 master-0 kubenswrapper[4207]: E0224 02:03:46.773193 4207 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 24 02:03:46.773298 master-0 kubenswrapper[4207]: I0224 02:03:46.773207 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbda577-b943-4b5c-b041-948aece8e40f-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"
Feb 24 02:03:46.773298 master-0 kubenswrapper[4207]: E0224 02:03:46.773249 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.273231922 +0000 UTC m=+112.616536202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found
Feb 24 02:03:46.773421 master-0 kubenswrapper[4207]: I0224 02:03:46.773370 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbda577-b943-4b5c-b041-948aece8e40f-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"
Feb 24 02:03:46.773421 master-0 kubenswrapper[4207]: I0224 02:03:46.773386 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:46.773606 master-0 kubenswrapper[4207]: E0224 02:03:46.773496 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:47.273395856 +0000 UTC m=+112.616700126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found
Feb 24 02:03:46.774437 master-0 kubenswrapper[4207]: I0224 02:03:46.774401 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"
Feb 24 02:03:46.774499 master-0 kubenswrapper[4207]: I0224 02:03:46.774427 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:46.775122 master-0 kubenswrapper[4207]: I0224 02:03:46.775085 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/303d5058-84df-40d1-a941-896b093ae470-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"
Feb 24 02:03:46.775251 master-0 kubenswrapper[4207]: I0224 02:03:46.775219 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a9ccd8e-d964-4c03-8ffc-51b464030c25-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:46.775376 master-0 kubenswrapper[4207]: I0224 02:03:46.775350 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-images\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:46.775542 master-0 kubenswrapper[4207]: I0224 02:03:46.775507 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84dc269-43ae-4083-9998-a0b3c90bb681-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:46.775645 master-0 kubenswrapper[4207]: I0224 02:03:46.775614 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85222bf-f51a-4232-8db1-1e6ee593617b-config\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:03:46.777292 master-0 kubenswrapper[4207]: I0224 02:03:46.777174 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/303d5058-84df-40d1-a941-896b093ae470-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"
Feb 24 02:03:46.777475 master-0 kubenswrapper[4207]: I0224 02:03:46.777447 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"
Feb 24 02:03:46.777708 master-0 kubenswrapper[4207]: I0224 02:03:46.777676 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabdddba-5507-4e47-98ef-a00c6d0f305d-serving-cert\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:46.778068 master-0 kubenswrapper[4207]: I0224 02:03:46.778012 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92835f0-7f32-4584-8304-843d7979392a-serving-cert\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:03:46.778239 master-0 kubenswrapper[4207]: I0224 02:03:46.778208 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-serving-cert\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:46.778536 master-0 kubenswrapper[4207]: I0224 02:03:46.778506 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36d8451-0fda-4d9d-a850-d05c8f847016-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:03:46.779238 master-0 kubenswrapper[4207]: I0224 02:03:46.779209 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85222bf-f51a-4232-8db1-1e6ee593617b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:03:46.780014 master-0 kubenswrapper[4207]: I0224 02:03:46.779975 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02536a3-7d3e-4e74-9625-aefed518ec35-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:46.805799 master-0 kubenswrapper[4207]: I0224 02:03:46.805763 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qph4g\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-kube-api-access-qph4g\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:03:46.825839 master-0 kubenswrapper[4207]: I0224 02:03:46.825784 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:46.839994 master-0 kubenswrapper[4207]: I0224 02:03:46.839958 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-pwjpw\" (UniqueName: \"kubernetes.io/projected/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-kube-api-access-pwjpw\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:46.866074 master-0 kubenswrapper[4207]: I0224 02:03:46.866039 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86pcb\" (UniqueName: \"kubernetes.io/projected/7b098bd4-5751-4b01-8409-0688fd29233e-kube-api-access-86pcb\") pod \"csi-snapshot-controller-operator-6fb4df594f-c95qc\" (UID: \"7b098bd4-5751-4b01-8409-0688fd29233e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" Feb 24 02:03:46.875638 master-0 kubenswrapper[4207]: I0224 02:03:46.875032 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-iptables-alerter-script\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:46.875638 master-0 kubenswrapper[4207]: I0224 02:03:46.875621 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-iptables-alerter-script\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:46.876062 master-0 kubenswrapper[4207]: I0224 02:03:46.876011 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv6t5\" (UniqueName: \"kubernetes.io/projected/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-kube-api-access-qv6t5\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " 
pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:46.876138 master-0 kubenswrapper[4207]: I0224 02:03:46.876110 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-host-slash\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:46.876258 master-0 kubenswrapper[4207]: I0224 02:03:46.876230 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-host-slash\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:46.880019 master-0 kubenswrapper[4207]: I0224 02:03:46.879937 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-bound-sa-token\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:46.896534 master-0 kubenswrapper[4207]: I0224 02:03:46.896484 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:46.899570 master-0 kubenswrapper[4207]: I0224 02:03:46.899517 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:46.919138 master-0 kubenswrapper[4207]: I0224 02:03:46.919089 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02536a3-7d3e-4e74-9625-aefed518ec35-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:03:46.940717 master-0 kubenswrapper[4207]: I0224 02:03:46.940419 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" Feb 24 02:03:46.948890 master-0 kubenswrapper[4207]: I0224 02:03:46.948824 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njjq8\" (UniqueName: \"kubernetes.io/projected/b36d8451-0fda-4d9d-a850-d05c8f847016-kube-api-access-njjq8\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:03:46.966819 master-0 kubenswrapper[4207]: I0224 02:03:46.966748 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ssxg\" (UniqueName: \"kubernetes.io/projected/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-kube-api-access-2ssxg\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:46.980098 master-0 kubenswrapper[4207]: I0224 02:03:46.980028 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:03:47.000747 master-0 kubenswrapper[4207]: I0224 02:03:47.000687 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6f7j\" (UniqueName: \"kubernetes.io/projected/cabdddba-5507-4e47-98ef-a00c6d0f305d-kube-api-access-h6f7j\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:47.002237 master-0 kubenswrapper[4207]: I0224 02:03:47.002194 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nwzm\" (UniqueName: \"kubernetes.io/projected/c92835f0-7f32-4584-8304-843d7979392a-kube-api-access-6nwzm\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:03:47.025271 master-0 kubenswrapper[4207]: I0224 02:03:47.025210 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85222bf-f51a-4232-8db1-1e6ee593617b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:03:47.050625 master-0 kubenswrapper[4207]: I0224 02:03:47.047065 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:47.060540 master-0 kubenswrapper[4207]: I0224 02:03:47.057364 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82hfh\" (UniqueName: \"kubernetes.io/projected/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-kube-api-access-82hfh\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:03:47.065337 master-0 kubenswrapper[4207]: I0224 02:03:47.065180 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssz8p\" (UniqueName: \"kubernetes.io/projected/6a9ccd8e-d964-4c03-8ffc-51b464030c25-kube-api-access-ssz8p\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:47.081040 master-0 kubenswrapper[4207]: I0224 02:03:47.072674 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:47.092481 master-0 kubenswrapper[4207]: I0224 02:03:47.091544 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgjlz\" (UniqueName: \"kubernetes.io/projected/6320dbb5-b84d-4a57-8c65-fbed8421f84a-kube-api-access-pgjlz\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:03:47.107187 master-0 kubenswrapper[4207]: I0224 02:03:47.104556 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sp95\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-kube-api-access-9sp95\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:47.120539 master-0 kubenswrapper[4207]: I0224 02:03:47.120472 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6lsp\" (UniqueName: \"kubernetes.io/projected/db8d6627-394c-4087-bfa4-bf7580f6bb4b-kube-api-access-x6lsp\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:47.139717 master-0 kubenswrapper[4207]: I0224 02:03:47.132513 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:47.145039 master-0 kubenswrapper[4207]: I0224 02:03:47.145000 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr"] Feb 24 02:03:47.150753 master-0 kubenswrapper[4207]: I0224 02:03:47.150715 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:03:47.155694 master-0 kubenswrapper[4207]: I0224 02:03:47.155654 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8hv\" (UniqueName: \"kubernetes.io/projected/2cb764f6-40f8-4e87-8be0-b9d7b0364201-kube-api-access-sp8hv\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: I0224 02:03:47.178892 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc"] Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: I0224 02:03:47.180975 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bl6\" (UniqueName: \"kubernetes.io/projected/303d5058-84df-40d1-a941-896b093ae470-kube-api-access-79bl6\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: I0224 02:03:47.182487 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: 
\"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: I0224 02:03:47.182547 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: I0224 02:03:47.182659 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: I0224 02:03:47.182711 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: I0224 02:03:47.182738 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: E0224 02:03:47.182862 4207 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: 
secret "marketplace-operator-metrics" not found Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: E0224 02:03:47.182913 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.182895761 +0000 UTC m=+113.526200011 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: E0224 02:03:47.183315 4207 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: E0224 02:03:47.183347 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.183337054 +0000 UTC m=+113.526641304 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: E0224 02:03:47.183388 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: E0224 02:03:47.183413 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.183404346 +0000 UTC m=+113.526708596 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: E0224 02:03:47.183453 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: E0224 02:03:47.183474 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.183467267 +0000 UTC m=+113.526771517 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found Feb 24 02:03:47.184399 master-0 kubenswrapper[4207]: E0224 02:03:47.183511 4207 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:47.185643 master-0 kubenswrapper[4207]: E0224 02:03:47.183532 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.183525289 +0000 UTC m=+113.526829539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:47.189043 master-0 kubenswrapper[4207]: W0224 02:03:47.188986 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b098bd4_5751_4b01_8409_0688fd29233e.slice/crio-35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49 WatchSource:0}: Error finding container 35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49: Status 404 returned error can't find the container with id 35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49 Feb 24 02:03:47.197677 master-0 kubenswrapper[4207]: I0224 02:03:47.197050 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2b65\" (UniqueName: \"kubernetes.io/projected/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-kube-api-access-n2b65\") pod 
\"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:03:47.209029 master-0 kubenswrapper[4207]: I0224 02:03:47.207498 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:03:47.217457 master-0 kubenswrapper[4207]: I0224 02:03:47.217415 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpg26\" (UniqueName: \"kubernetes.io/projected/fcbda577-b943-4b5c-b041-948aece8e40f-kube-api-access-vpg26\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:03:47.230973 master-0 kubenswrapper[4207]: I0224 02:03:47.230935 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:47.234831 master-0 kubenswrapper[4207]: I0224 02:03:47.234787 4207 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv6t5\" (UniqueName: \"kubernetes.io/projected/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-kube-api-access-qv6t5\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:47.255642 master-0 kubenswrapper[4207]: I0224 02:03:47.252355 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:03:47.258623 master-0 kubenswrapper[4207]: I0224 02:03:47.258544 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"] Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.284802 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.286489 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.286514 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"] Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.286530 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: E0224 02:03:47.285155 4207 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret 
"metrics-tls" not found Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.286680 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.286716 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: E0224 02:03:47.286745 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.286720553 +0000 UTC m=+113.630024983 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.286775 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.286817 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.286966 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: E0224 02:03:47.286824 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: E0224 02:03:47.287042 4207 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.287025312 +0000 UTC m=+113.630329542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: I0224 02:03:47.287072 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:47.295282 master-0 kubenswrapper[4207]: E0224 02:03:47.287091 4207 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.287135 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.287121925 +0000 UTC m=+113.630426395 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.286859 4207 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.287149 4207 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.286915 4207 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.286666 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.286949 4207 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.286990 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.287174 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. 
No retries permitted until 2026-02-24 02:03:48.287166586 +0000 UTC m=+113.630470826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.287317 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.28730588 +0000 UTC m=+113.630610380 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.287349 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.287340171 +0000 UTC m=+113.630644691 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.287375 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.287364361 +0000 UTC m=+113.630668871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.287398 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:48.287389512 +0000 UTC m=+113.630694012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found Feb 24 02:03:47.295990 master-0 kubenswrapper[4207]: E0224 02:03:47.287423 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. 
No retries permitted until 2026-02-24 02:03:48.287415393 +0000 UTC m=+113.630719893 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found Feb 24 02:03:47.305922 master-0 kubenswrapper[4207]: I0224 02:03:47.300671 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"] Feb 24 02:03:47.305922 master-0 kubenswrapper[4207]: W0224 02:03:47.304754 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbe9964a_9e82_48e9_82b0_7c07e4cec3a2.slice/crio-ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e WatchSource:0}: Error finding container ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e: Status 404 returned error can't find the container with id ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e Feb 24 02:03:47.347333 master-0 kubenswrapper[4207]: I0224 02:03:47.346687 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"] Feb 24 02:03:47.356581 master-0 kubenswrapper[4207]: W0224 02:03:47.356497 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcabdddba_5507_4e47_98ef_a00c6d0f305d.slice/crio-e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5 WatchSource:0}: Error finding container e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5: Status 404 returned error can't find the container with id e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5 Feb 24 
02:03:47.361996 master-0 kubenswrapper[4207]: I0224 02:03:47.361963 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:03:47.379374 master-0 kubenswrapper[4207]: I0224 02:03:47.379341 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"] Feb 24 02:03:47.391584 master-0 kubenswrapper[4207]: W0224 02:03:47.391500 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85222bf_f51a_4232_8db1_1e6ee593617b.slice/crio-9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91 WatchSource:0}: Error finding container 9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91: Status 404 returned error can't find the container with id 9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91 Feb 24 02:03:47.402685 master-0 kubenswrapper[4207]: I0224 02:03:47.402649 4207 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:03:47.433387 master-0 kubenswrapper[4207]: I0224 02:03:47.433345 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"] Feb 24 02:03:47.471652 master-0 kubenswrapper[4207]: I0224 02:03:47.470558 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"] Feb 24 02:03:47.486662 master-0 kubenswrapper[4207]: I0224 02:03:47.483424 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"] Feb 24 02:03:47.489462 master-0 kubenswrapper[4207]: I0224 02:03:47.489425 4207 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:47.536614 master-0 kubenswrapper[4207]: I0224 02:03:47.535599 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"] Feb 24 02:03:47.541127 master-0 kubenswrapper[4207]: W0224 02:03:47.541101 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e50df05_0f7f_4c4f_84fa_92dd1f7ee86c.slice/crio-d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441 WatchSource:0}: Error finding container d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441: Status 404 returned error can't find the container with id d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441 Feb 24 02:03:47.594218 master-0 kubenswrapper[4207]: I0224 02:03:47.594161 4207 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"] Feb 24 
02:03:47.608303 master-0 kubenswrapper[4207]: W0224 02:03:47.608242 4207 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcbda577_b943_4b5c_b041_948aece8e40f.slice/crio-e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94 WatchSource:0}: Error finding container e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94: Status 404 returned error can't find the container with id e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94 Feb 24 02:03:48.088594 master-0 kubenswrapper[4207]: I0224 02:03:48.087695 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" event={"ID":"7b098bd4-5751-4b01-8409-0688fd29233e","Type":"ContainerStarted","Data":"35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49"} Feb 24 02:03:48.094695 master-0 kubenswrapper[4207]: I0224 02:03:48.090168 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" event={"ID":"a02536a3-7d3e-4e74-9625-aefed518ec35","Type":"ContainerStarted","Data":"a75855ac22ad61c526e140082a63e50802db589f96d5c1f8fe72f371e5c93069"} Feb 24 02:03:48.094695 master-0 kubenswrapper[4207]: I0224 02:03:48.092596 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-rjbl5" event={"ID":"d8e20d47-aeb6-41bf-9715-c437beb8e9e4","Type":"ContainerStarted","Data":"29621914ea05b7d9aefb3ef92742f6212ca05bc6251d28674ae45265f66276a1"} Feb 24 02:03:48.094695 master-0 kubenswrapper[4207]: I0224 02:03:48.094390 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerStarted","Data":"e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5"} 
Feb 24 02:03:48.097948 master-0 kubenswrapper[4207]: I0224 02:03:48.097547 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" event={"ID":"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2","Type":"ContainerStarted","Data":"ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e"} Feb 24 02:03:48.101220 master-0 kubenswrapper[4207]: I0224 02:03:48.100686 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" event={"ID":"f85222bf-f51a-4232-8db1-1e6ee593617b","Type":"ContainerStarted","Data":"2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369"} Feb 24 02:03:48.101220 master-0 kubenswrapper[4207]: I0224 02:03:48.100757 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" event={"ID":"f85222bf-f51a-4232-8db1-1e6ee593617b","Type":"ContainerStarted","Data":"9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91"} Feb 24 02:03:48.117641 master-0 kubenswrapper[4207]: I0224 02:03:48.107502 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerStarted","Data":"f64a1a9e81543288c082ba54493b536ca2db47fef63c0b6ea8e2ecd8d4fc6a3b"} Feb 24 02:03:48.117641 master-0 kubenswrapper[4207]: I0224 02:03:48.110002 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" event={"ID":"fcbda577-b943-4b5c-b041-948aece8e40f","Type":"ContainerStarted","Data":"e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94"} Feb 24 02:03:48.117641 master-0 kubenswrapper[4207]: I0224 02:03:48.112367 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerStarted","Data":"a6e4933443321f6f827221301b84c881881ea51343c84ac3ad457e15891f86d0"} Feb 24 02:03:48.117641 master-0 kubenswrapper[4207]: I0224 02:03:48.115806 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" event={"ID":"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c","Type":"ContainerStarted","Data":"d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441"} Feb 24 02:03:48.117641 master-0 kubenswrapper[4207]: I0224 02:03:48.117115 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerStarted","Data":"64c3913ef0868e964da24e47fde7afcb2edc5db0527066ac2d8451806802e649"} Feb 24 02:03:48.118052 master-0 kubenswrapper[4207]: I0224 02:03:48.117997 4207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" podStartSLOduration=69.117983639 podStartE2EDuration="1m9.117983639s" podCreationTimestamp="2026-02-24 02:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:03:48.116279802 +0000 UTC m=+113.459584052" watchObservedRunningTime="2026-02-24 02:03:48.117983639 +0000 UTC m=+113.461287889" Feb 24 02:03:48.123923 master-0 kubenswrapper[4207]: I0224 02:03:48.123897 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" event={"ID":"9b5620d6-a5fe-45d7-b39e-8bed7f602a17","Type":"ContainerStarted","Data":"83490c1a955fe6b943eda48c6b81b0120dda14df023aa9b81ab0e80b7e90cadf"} Feb 24 02:03:48.126901 master-0 kubenswrapper[4207]: 
I0224 02:03:48.126851 4207 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" event={"ID":"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7","Type":"ContainerStarted","Data":"1cc1996551692c223eb12edcadd4f14bef06fed859ebb6d00f4391944783b38d"} Feb 24 02:03:48.200380 master-0 kubenswrapper[4207]: I0224 02:03:48.200325 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:48.200593 master-0 kubenswrapper[4207]: E0224 02:03:48.200539 4207 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:48.200676 master-0 kubenswrapper[4207]: E0224 02:03:48.200660 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.200625247 +0000 UTC m=+115.543929487 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:48.200934 master-0 kubenswrapper[4207]: I0224 02:03:48.200902 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:48.201439 master-0 kubenswrapper[4207]: E0224 02:03:48.201325 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:03:48.201439 master-0 kubenswrapper[4207]: I0224 02:03:48.201353 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:48.201439 master-0 kubenswrapper[4207]: E0224 02:03:48.201413 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.201387799 +0000 UTC m=+115.544692039 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:03:48.201536 master-0 kubenswrapper[4207]: I0224 02:03:48.201507 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:48.201568 master-0 kubenswrapper[4207]: I0224 02:03:48.201547 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:48.201618 master-0 kubenswrapper[4207]: E0224 02:03:48.201606 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 24 02:03:48.201661 master-0 kubenswrapper[4207]: E0224 02:03:48.201647 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.201633356 +0000 UTC m=+115.544937596 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found Feb 24 02:03:48.201696 master-0 kubenswrapper[4207]: E0224 02:03:48.201664 4207 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 02:03:48.201696 master-0 kubenswrapper[4207]: E0224 02:03:48.201690 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.201683567 +0000 UTC m=+115.544987807 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found Feb 24 02:03:48.201824 master-0 kubenswrapper[4207]: E0224 02:03:48.201698 4207 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:48.201824 master-0 kubenswrapper[4207]: E0224 02:03:48.201719 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.201713428 +0000 UTC m=+115.545017668 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:48.302714 master-0 kubenswrapper[4207]: I0224 02:03:48.302638 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:48.302714 master-0 kubenswrapper[4207]: I0224 02:03:48.302692 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:48.302869 master-0 kubenswrapper[4207]: I0224 02:03:48.302723 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:48.302869 master-0 kubenswrapper[4207]: E0224 02:03:48.302747 4207 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:48.302869 master-0 kubenswrapper[4207]: I0224 02:03:48.302760 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:48.302869 master-0 kubenswrapper[4207]: E0224 02:03:48.302786 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.302777353 +0000 UTC m=+115.646081593 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found Feb 24 02:03:48.302869 master-0 kubenswrapper[4207]: I0224 02:03:48.302804 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:48.302869 master-0 kubenswrapper[4207]: I0224 02:03:48.302837 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:48.302869 master-0 kubenswrapper[4207]: E0224 02:03:48.302868 4207 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret 
"image-registry-operator-tls" not found Feb 24 02:03:48.303220 master-0 kubenswrapper[4207]: E0224 02:03:48.302914 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 02:03:48.303220 master-0 kubenswrapper[4207]: E0224 02:03:48.302927 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.302908286 +0000 UTC m=+115.646212526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found Feb 24 02:03:48.303220 master-0 kubenswrapper[4207]: E0224 02:03:48.302942 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.302935807 +0000 UTC m=+115.646240047 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found Feb 24 02:03:48.303220 master-0 kubenswrapper[4207]: E0224 02:03:48.302992 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 02:03:48.303220 master-0 kubenswrapper[4207]: E0224 02:03:48.303010 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.303005049 +0000 UTC m=+115.646309289 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found Feb 24 02:03:48.303220 master-0 kubenswrapper[4207]: I0224 02:03:48.302869 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:03:48.303220 master-0 kubenswrapper[4207]: I0224 02:03:48.303110 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:03:48.303220 master-0 kubenswrapper[4207]: I0224 02:03:48.303150 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:48.303220 master-0 kubenswrapper[4207]: E0224 02:03:48.303214 4207 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 02:03:48.303445 master-0 kubenswrapper[4207]: E0224 02:03:48.303234 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.303226885 +0000 UTC m=+115.646531115 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found Feb 24 02:03:48.303445 master-0 kubenswrapper[4207]: E0224 02:03:48.303266 4207 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 24 02:03:48.303445 master-0 kubenswrapper[4207]: E0224 02:03:48.303283 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.303278457 +0000 UTC m=+115.646582687 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found Feb 24 02:03:48.303445 master-0 kubenswrapper[4207]: E0224 02:03:48.303332 4207 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:48.303445 master-0 kubenswrapper[4207]: E0224 02:03:48.303361 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.303353199 +0000 UTC m=+115.646657439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:48.303445 master-0 kubenswrapper[4207]: E0224 02:03:48.303397 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:48.303445 master-0 kubenswrapper[4207]: E0224 02:03:48.303423 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.303416811 +0000 UTC m=+115.646721051 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:48.303445 master-0 kubenswrapper[4207]: E0224 02:03:48.303450 4207 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:48.303669 master-0 kubenswrapper[4207]: E0224 02:03:48.303468 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:50.303463452 +0000 UTC m=+115.646767692 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found Feb 24 02:03:50.221726 master-0 kubenswrapper[4207]: I0224 02:03:50.221423 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: I0224 02:03:50.221834 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: I0224 02:03:50.221884 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: I0224 02:03:50.221989 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " 
pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: I0224 02:03:50.222056 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222304 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222388 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.222358204 +0000 UTC m=+119.565662484 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222410 4207 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222482 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.222458867 +0000 UTC m=+119.565763107 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222529 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222550 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.222542999 +0000 UTC m=+119.565847239 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222613 4207 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222636 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.222629352 +0000 UTC m=+119.565933592 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222679 4207 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 02:03:50.223031 master-0 kubenswrapper[4207]: E0224 02:03:50.222701 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.222694524 +0000 UTC m=+119.565998764 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found Feb 24 02:03:50.323311 master-0 kubenswrapper[4207]: I0224 02:03:50.323222 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:50.323311 master-0 kubenswrapper[4207]: I0224 02:03:50.323306 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") 
pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:50.323636 master-0 kubenswrapper[4207]: I0224 02:03:50.323365 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:50.323636 master-0 kubenswrapper[4207]: E0224 02:03:50.323409 4207 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 24 02:03:50.323636 master-0 kubenswrapper[4207]: E0224 02:03:50.323510 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.323487991 +0000 UTC m=+119.666792271 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found Feb 24 02:03:50.323636 master-0 kubenswrapper[4207]: E0224 02:03:50.323633 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 02:03:50.323882 master-0 kubenswrapper[4207]: E0224 02:03:50.323715 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.323689937 +0000 UTC m=+119.666994207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found Feb 24 02:03:50.323882 master-0 kubenswrapper[4207]: E0224 02:03:50.323798 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:50.323882 master-0 kubenswrapper[4207]: E0224 02:03:50.323818 4207 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:50.324038 master-0 kubenswrapper[4207]: I0224 02:03:50.323413 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:03:50.324038 master-0 kubenswrapper[4207]: E0224 02:03:50.323837 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.32382142 +0000 UTC m=+119.667125690 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:50.324038 master-0 kubenswrapper[4207]: E0224 02:03:50.324008 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.323962154 +0000 UTC m=+119.667266464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found Feb 24 02:03:50.324643 master-0 kubenswrapper[4207]: I0224 02:03:50.324227 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:03:50.324643 master-0 kubenswrapper[4207]: E0224 02:03:50.324492 4207 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:50.324643 master-0 kubenswrapper[4207]: I0224 02:03:50.324531 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:50.324643 master-0 kubenswrapper[4207]: E0224 02:03:50.324608 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.324549931 +0000 UTC m=+119.667854211 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:50.324917 master-0 kubenswrapper[4207]: I0224 02:03:50.324654 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:50.324917 master-0 kubenswrapper[4207]: E0224 02:03:50.324717 4207 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 02:03:50.324917 master-0 kubenswrapper[4207]: E0224 02:03:50.324747 4207 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:50.324917 master-0 kubenswrapper[4207]: E0224 02:03:50.324779 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.324759406 +0000 UTC m=+119.668063686 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found Feb 24 02:03:50.324917 master-0 kubenswrapper[4207]: E0224 02:03:50.324902 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.32488627 +0000 UTC m=+119.668190540 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found Feb 24 02:03:50.324917 master-0 kubenswrapper[4207]: I0224 02:03:50.324898 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:50.325240 master-0 kubenswrapper[4207]: E0224 02:03:50.324950 4207 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 24 02:03:50.325240 master-0 kubenswrapper[4207]: I0224 02:03:50.324966 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:50.325240 master-0 kubenswrapper[4207]: E0224 02:03:50.325001 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.324986663 +0000 UTC m=+119.668290933 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found Feb 24 02:03:50.325240 master-0 kubenswrapper[4207]: E0224 02:03:50.325205 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 02:03:50.325455 master-0 kubenswrapper[4207]: E0224 02:03:50.325277 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:54.32525558 +0000 UTC m=+119.668559900 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found Feb 24 02:03:54.281077 master-0 kubenswrapper[4207]: I0224 02:03:54.280615 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: I0224 02:03:54.281089 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: E0224 02:03:54.280891 4207 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: E0224 02:03:54.281187 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.281160018 +0000 UTC m=+127.624464278 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: E0224 02:03:54.281189 4207 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: E0224 02:03:54.281223 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.281215389 +0000 UTC m=+127.624519639 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: I0224 02:03:54.281324 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: I0224 02:03:54.281452 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: 
\"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: E0224 02:03:54.281461 4207 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: I0224 02:03:54.281606 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: E0224 02:03:54.281631 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.28161373 +0000 UTC m=+127.624917980 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: E0224 02:03:54.281497 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: E0224 02:03:54.281675 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. 
No retries permitted until 2026-02-24 02:04:02.281664962 +0000 UTC m=+127.624969212 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:03:54.281783 master-0 kubenswrapper[4207]: E0224 02:03:54.281768 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 24 02:03:54.282492 master-0 kubenswrapper[4207]: E0224 02:03:54.281861 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.281832437 +0000 UTC m=+127.625136707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found Feb 24 02:03:54.382732 master-0 kubenswrapper[4207]: I0224 02:03:54.382602 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:54.382732 master-0 kubenswrapper[4207]: I0224 02:03:54.382648 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:54.382949 master-0 kubenswrapper[4207]: E0224 02:03:54.382795 4207 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 24 02:03:54.382949 master-0 kubenswrapper[4207]: I0224 02:03:54.382844 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:54.382949 master-0 kubenswrapper[4207]: E0224 02:03:54.382901 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.382867271 +0000 UTC m=+127.726171551 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found Feb 24 02:03:54.383132 master-0 kubenswrapper[4207]: I0224 02:03:54.382977 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:03:54.383132 master-0 kubenswrapper[4207]: E0224 02:03:54.382994 4207 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:54.383132 master-0 kubenswrapper[4207]: E0224 02:03:54.383048 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:54.383132 master-0 kubenswrapper[4207]: E0224 02:03:54.383065 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.383040495 +0000 UTC m=+127.726344835 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found
Feb 24 02:03:54.383132 master-0 kubenswrapper[4207]: E0224 02:03:54.383096 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.383085237 +0000 UTC m=+127.726389467 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found
Feb 24 02:03:54.383132 master-0 kubenswrapper[4207]: E0224 02:03:54.383102 4207 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 24 02:03:54.383449 master-0 kubenswrapper[4207]: I0224 02:03:54.383141 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:54.383449 master-0 kubenswrapper[4207]: E0224 02:03:54.383160 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.383146548 +0000 UTC m=+127.726450818 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found
Feb 24 02:03:54.383449 master-0 kubenswrapper[4207]: E0224 02:03:54.383229 4207 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 24 02:03:54.383449 master-0 kubenswrapper[4207]: E0224 02:03:54.383264 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.383253951 +0000 UTC m=+127.726558191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found
Feb 24 02:03:54.384686 master-0 kubenswrapper[4207]: I0224 02:03:54.384644 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"
Feb 24 02:03:54.384858 master-0 kubenswrapper[4207]: I0224 02:03:54.384691 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2"
Feb 24 02:03:54.384858 master-0 kubenswrapper[4207]: I0224 02:03:54.384717 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:54.384858 master-0 kubenswrapper[4207]: I0224 02:03:54.384749 4207 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:54.384858 master-0 kubenswrapper[4207]: E0224 02:03:54.384754 4207 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 24 02:03:54.384858 master-0 kubenswrapper[4207]: E0224 02:03:54.384832 4207 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 24 02:03:54.384858 master-0 kubenswrapper[4207]: E0224 02:03:54.384859 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.384847896 +0000 UTC m=+127.728152136 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found
Feb 24 02:03:54.385472 master-0 kubenswrapper[4207]: E0224 02:03:54.384881 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.384871527 +0000 UTC m=+127.728175887 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found
Feb 24 02:03:54.385472 master-0 kubenswrapper[4207]: E0224 02:03:54.384794 4207 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 02:03:54.385472 master-0 kubenswrapper[4207]: E0224 02:03:54.384890 4207 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Feb 24 02:03:54.385472 master-0 kubenswrapper[4207]: E0224 02:03:54.384916 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.384908038 +0000 UTC m=+127.728212278 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found
Feb 24 02:03:54.385472 master-0 kubenswrapper[4207]: E0224 02:03:54.384929 4207 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.384922908 +0000 UTC m=+127.728227138 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found
Feb 24 02:03:55.560092 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 24 02:03:55.615019 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 24 02:03:55.615448 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 24 02:03:55.617162 master-0 systemd[1]: kubelet.service: Consumed 11.536s CPU time.
Feb 24 02:03:55.628434 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 24 02:03:55.723952 master-0 kubenswrapper[7864]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 02:03:55.723952 master-0 kubenswrapper[7864]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 24 02:03:55.725198 master-0 kubenswrapper[7864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 02:03:55.725198 master-0 kubenswrapper[7864]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 02:03:55.725198 master-0 kubenswrapper[7864]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 24 02:03:55.725198 master-0 kubenswrapper[7864]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 02:03:55.725198 master-0 kubenswrapper[7864]: I0224 02:03:55.724208 7864 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 24 02:03:55.740532 master-0 kubenswrapper[7864]: W0224 02:03:55.740449 7864 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 02:03:55.740532 master-0 kubenswrapper[7864]: W0224 02:03:55.740539 7864 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 02:03:55.740532 master-0 kubenswrapper[7864]: W0224 02:03:55.740545 7864 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 02:03:55.740532 master-0 kubenswrapper[7864]: W0224 02:03:55.740552 7864 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 02:03:55.740532 master-0 kubenswrapper[7864]: W0224 02:03:55.740559 7864 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740565 7864 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740584 7864 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740590 7864 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740594 7864 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740598 7864 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740602 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740606 7864 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740610 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740614 7864 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740617 7864 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740621 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740625 7864 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740629 7864 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740633 7864 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740637 7864 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740640 7864 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740644 7864 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740647 7864 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740651 7864 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 02:03:55.740768 master-0 kubenswrapper[7864]: W0224 02:03:55.740655 7864 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740784 7864 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740792 7864 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740799 7864 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740805 7864 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740811 7864 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740816 7864 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740821 7864 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740825 7864 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740829 7864 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740844 7864 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740849 7864 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740854 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740857 7864 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740861 7864 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740866 7864 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740947 7864 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740953 7864 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740957 7864 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 02:03:55.741237 master-0 kubenswrapper[7864]: W0224 02:03:55.740962 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.740966 7864 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.740970 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741022 7864 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741026 7864 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741031 7864 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741112 7864 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741117 7864 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741121 7864 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741125 7864 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741129 7864 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741135 7864 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741140 7864 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741154 7864 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741158 7864 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741162 7864 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741167 7864 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741171 7864 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741175 7864 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741179 7864 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 02:03:55.741747 master-0 kubenswrapper[7864]: W0224 02:03:55.741183 7864 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: W0224 02:03:55.741286 7864 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: W0224 02:03:55.741290 7864 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: W0224 02:03:55.741295 7864 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: W0224 02:03:55.741301 7864 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: W0224 02:03:55.741305 7864 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: W0224 02:03:55.741309 7864 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: W0224 02:03:55.741313 7864 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: W0224 02:03:55.741317 7864 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741588 7864 flags.go:64] FLAG: --address="0.0.0.0"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741618 7864 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741632 7864 flags.go:64] FLAG: --anonymous-auth="true"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741639 7864 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741649 7864 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741655 7864 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741664 7864 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741672 7864 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741678 7864 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741683 7864 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741689 7864 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741696 7864 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741702 7864 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 24 02:03:55.742197 master-0 kubenswrapper[7864]: I0224 02:03:55.741708 7864 flags.go:64] FLAG: --cgroup-root=""
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741713 7864 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741719 7864 flags.go:64] FLAG: --client-ca-file=""
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741725 7864 flags.go:64] FLAG: --cloud-config=""
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741730 7864 flags.go:64] FLAG: --cloud-provider=""
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741736 7864 flags.go:64] FLAG: --cluster-dns="[]"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741743 7864 flags.go:64] FLAG: --cluster-domain=""
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741749 7864 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741757 7864 flags.go:64] FLAG: --config-dir=""
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741763 7864 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741771 7864 flags.go:64] FLAG: --container-log-max-files="5"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741779 7864 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741785 7864 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741792 7864 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741800 7864 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741807 7864 flags.go:64] FLAG: --contention-profiling="false"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741812 7864 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741818 7864 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741824 7864 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741829 7864 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741837 7864 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741843 7864 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741849 7864 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741857 7864 flags.go:64] FLAG: --enable-load-reader="false"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741863 7864 flags.go:64] FLAG: --enable-server="true"
Feb 24 02:03:55.742695 master-0 kubenswrapper[7864]: I0224 02:03:55.741868 7864 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741878 7864 flags.go:64] FLAG: --event-burst="100"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741884 7864 flags.go:64] FLAG: --event-qps="50"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741890 7864 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741896 7864 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741902 7864 flags.go:64] FLAG: --eviction-hard=""
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741910 7864 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741942 7864 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741949 7864 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741955 7864 flags.go:64] FLAG: --eviction-soft=""
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741960 7864 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741966 7864 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741972 7864 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741978 7864 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741983 7864 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741988 7864 flags.go:64] FLAG: --fail-swap-on="true"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.741993 7864 flags.go:64] FLAG: --feature-gates=""
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742024 7864 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742030 7864 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742036 7864 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742042 7864 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742050 7864 flags.go:64] FLAG: --healthz-port="10248"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742057 7864 flags.go:64] FLAG: --help="false"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742062 7864 flags.go:64] FLAG: --hostname-override=""
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742068 7864 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742074 7864 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 24 02:03:55.743286 master-0 kubenswrapper[7864]: I0224 02:03:55.742100 7864 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742107 7864 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742113 7864 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742118 7864 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742124 7864 flags.go:64] FLAG: --image-service-endpoint=""
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742129 7864 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742135 7864 flags.go:64] FLAG: --kube-api-burst="100"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742140 7864 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742148 7864 flags.go:64] FLAG: --kube-api-qps="50"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742153 7864 flags.go:64] FLAG: --kube-reserved=""
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742179 7864 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742187 7864 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742192 7864 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742198 7864 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742205 7864 flags.go:64] FLAG: --lock-file=""
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742211 7864 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742218 7864 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742225 7864 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742235 7864 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742262 7864 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742269 7864 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742274 7864 flags.go:64] FLAG: --logging-format="text"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742280 7864 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742286 7864 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742292 7864 flags.go:64] FLAG: --manifest-url=""
Feb 24 02:03:55.743857 master-0 kubenswrapper[7864]: I0224 02:03:55.742298 7864 flags.go:64] FLAG: --manifest-url-header=""
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742306 7864 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742313 7864 flags.go:64] FLAG: --max-open-files="1000000"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742343 7864 flags.go:64] FLAG: --max-pods="110"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742349 7864 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742355 7864 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742361 7864 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742366 7864 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742373 7864 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742378 7864 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742383 7864 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742427 7864 flags.go:64] FLAG: --node-status-max-images="50"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742433 7864 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742440 7864 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742445 7864 flags.go:64] FLAG: --pod-cidr=""
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742451 7864 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742462 7864 flags.go:64] FLAG: --pod-manifest-path=""
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742468 7864 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742473 7864 flags.go:64] FLAG: --pods-per-core="0"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742480 7864 flags.go:64] FLAG: --port="10250"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742485 7864 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742491 7864 flags.go:64] FLAG: --provider-id=""
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742496 7864 flags.go:64] FLAG: --qos-reserved=""
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742503 7864 flags.go:64] FLAG: --read-only-port="10255"
Feb 24 02:03:55.744461 master-0 kubenswrapper[7864]: I0224 02:03:55.742509 7864 flags.go:64] FLAG: --register-node="true"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742514 7864 flags.go:64] FLAG: --register-schedulable="true"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742520 7864 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742538 7864 flags.go:64] FLAG: --registry-burst="10"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742544 7864 flags.go:64] FLAG: --registry-qps="5"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742549 7864 flags.go:64] FLAG: --reserved-cpus=""
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742554 7864 flags.go:64] FLAG: --reserved-memory=""
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742562 7864 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742583 7864 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742589 7864 flags.go:64] FLAG: --rotate-certificates="false"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742595 7864 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742601 7864 flags.go:64] FLAG: --runonce="false"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742606 7864 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742612 7864 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742618 7864 flags.go:64] FLAG: --seccomp-default="false"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742624 7864 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742629 7864 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742637 7864 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742643 7864 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742651 7864 flags.go:64] FLAG: --storage-driver-password="root"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742657 7864 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742663 7864 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742669 7864 flags.go:64] FLAG: --storage-driver-user="root"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742675 7864 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742681 7864 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 24 02:03:55.744991 master-0 kubenswrapper[7864]: I0224 02:03:55.742686 7864 flags.go:64] FLAG: --system-cgroups=""
Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742691 7864 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742705 7864 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742711 7864 flags.go:64] FLAG: --tls-cert-file=""
Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742718 7864 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742726 7864 flags.go:64] FLAG: --tls-min-version=""
Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742733 7864 flags.go:64] FLAG:
--tls-private-key-file="" Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742739 7864 flags.go:64] FLAG: --topology-manager-policy="none" Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742745 7864 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742751 7864 flags.go:64] FLAG: --topology-manager-scope="container" Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742757 7864 flags.go:64] FLAG: --v="2" Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742766 7864 flags.go:64] FLAG: --version="false" Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742773 7864 flags.go:64] FLAG: --vmodule="" Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742781 7864 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: I0224 02:03:55.742787 7864 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: W0224 02:03:55.743013 7864 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: W0224 02:03:55.743022 7864 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: W0224 02:03:55.743028 7864 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: W0224 02:03:55.743045 7864 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: W0224 02:03:55.743072 7864 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: W0224 02:03:55.743079 7864 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: W0224 02:03:55.743085 7864 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 02:03:55.745612 master-0 kubenswrapper[7864]: W0224 02:03:55.743090 7864 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743097 7864 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743102 7864 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743107 7864 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743113 7864 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743118 7864 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743122 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743148 7864 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743154 7864 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743159 7864 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743164 7864 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743169 7864 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs 
Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743173 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743178 7864 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743183 7864 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743187 7864 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743192 7864 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743197 7864 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743201 7864 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743206 7864 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 02:03:55.746085 master-0 kubenswrapper[7864]: W0224 02:03:55.743235 7864 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743240 7864 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743246 7864 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743252 7864 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743257 7864 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743262 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743267 7864 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743272 7864 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743281 7864 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743307 7864 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743316 7864 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743321 7864 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743327 7864 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743332 7864 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743337 7864 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743342 7864 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743346 7864 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743352 7864 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743356 7864 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743361 7864 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 02:03:55.746513 master-0 kubenswrapper[7864]: W0224 02:03:55.743365 7864 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743390 7864 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743395 7864 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743400 7864 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743404 7864 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743410 7864 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743414 7864 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743419 7864 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743425 7864 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743431 7864 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743437 7864 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743443 7864 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743471 7864 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743477 7864 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743484 7864 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743490 7864 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743497 7864 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743502 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743507 7864 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 02:03:55.746956 master-0 kubenswrapper[7864]: W0224 02:03:55.743512 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 02:03:55.747374 master-0 kubenswrapper[7864]: W0224 02:03:55.743520 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 02:03:55.747374 master-0 kubenswrapper[7864]: W0224 02:03:55.743526 7864 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 02:03:55.747374 master-0 kubenswrapper[7864]: W0224 02:03:55.743548 7864 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 02:03:55.747374 master-0 kubenswrapper[7864]: W0224 02:03:55.743555 7864 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 02:03:55.747374 master-0 kubenswrapper[7864]: W0224 02:03:55.743560 7864 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 02:03:55.747374 master-0 kubenswrapper[7864]: I0224 02:03:55.743608 7864 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 02:03:55.751965 master-0 kubenswrapper[7864]: I0224 02:03:55.751923 7864 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 24 02:03:55.752021 master-0 kubenswrapper[7864]: I0224 02:03:55.751969 7864 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752108 7864 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752117 7864 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752123 7864 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752130 7864 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752135 7864 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752141 7864 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752147 7864 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752153 7864 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752160 7864 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752170 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752176 7864 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752182 7864 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752189 7864 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752195 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 02:03:55.752188 master-0 kubenswrapper[7864]: W0224 02:03:55.752201 7864 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752207 7864 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752213 7864 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752218 7864 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752224 7864 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752229 7864 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752235 7864 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752240 7864 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752245 7864 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752251 7864 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752256 7864 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752262 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752267 7864 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752273 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752278 7864 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752284 7864 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752289 7864 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752295 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752302 7864 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752308 7864 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 02:03:55.752557 master-0 kubenswrapper[7864]: W0224 02:03:55.752315 7864 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752320 7864 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752328 7864 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752335 7864 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752341 7864 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752347 7864 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752353 7864 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752358 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752364 7864 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752369 7864 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752375 7864 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752380 7864 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752386 7864 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752392 7864 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752397 7864 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752403 7864 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752408 7864 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752413 7864 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752419 7864 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 02:03:55.752987 master-0 kubenswrapper[7864]: W0224 02:03:55.752424 7864 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752430 7864 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752435 7864 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752440 7864 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752445 7864 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752450 7864 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752456 7864 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752461 7864 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752469 7864 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752475 7864 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752481 7864 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752487 7864 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752492 7864 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752498 7864 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752504 7864 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752509 7864 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752515 7864 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752521 7864 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 02:03:55.753554 master-0 kubenswrapper[7864]: W0224 02:03:55.752526 7864 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: I0224 02:03:55.752535 7864 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752724 7864 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752732 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752739 7864 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752744 7864 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752751 7864 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752757 7864 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752762 7864 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752769 7864 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752774 7864 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752780 7864 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752785 7864 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752791 7864 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752796 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 02:03:55.753987 master-0 kubenswrapper[7864]: W0224 02:03:55.752804 7864 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752810 7864 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752817 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752823 7864 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752829 7864 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752835 7864 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752840 7864 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752846 7864 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752851 7864 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752856 7864 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752863 7864 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752870 7864 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752875 7864 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752881 7864 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752886 7864 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752891 7864 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752897 7864 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752902 7864 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752907 7864 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752912 7864 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 02:03:55.754315 master-0 kubenswrapper[7864]: W0224 02:03:55.752918 7864 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752924 7864 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752931 7864 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752937 7864 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752943 7864 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752950 7864 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752956 7864 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752962 7864 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752968 7864 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752974 7864 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752979 7864 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752985 7864 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752991 7864 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.752997 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.753003 7864 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.753009 7864 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: 
W0224 02:03:55.753014 7864 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.753020 7864 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.753025 7864 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.753030 7864 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 02:03:55.755174 master-0 kubenswrapper[7864]: W0224 02:03:55.753036 7864 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753041 7864 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753047 7864 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753052 7864 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753059 7864 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753066 7864 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753072 7864 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753078 7864 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753100 7864 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753106 7864 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753111 7864 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753116 7864 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753122 7864 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753127 7864 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753132 7864 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753138 7864 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753144 7864 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753149 7864 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 
02:03:55.755670 master-0 kubenswrapper[7864]: W0224 02:03:55.753155 7864 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 02:03:55.756051 master-0 kubenswrapper[7864]: I0224 02:03:55.753165 7864 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 02:03:55.756051 master-0 kubenswrapper[7864]: I0224 02:03:55.753382 7864 server.go:940] "Client rotation is on, will bootstrap in background" Feb 24 02:03:55.756051 master-0 kubenswrapper[7864]: I0224 02:03:55.755659 7864 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 24 02:03:55.756051 master-0 kubenswrapper[7864]: I0224 02:03:55.755749 7864 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 24 02:03:55.756051 master-0 kubenswrapper[7864]: I0224 02:03:55.756035 7864 server.go:997] "Starting client certificate rotation" Feb 24 02:03:55.756051 master-0 kubenswrapper[7864]: I0224 02:03:55.756048 7864 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 24 02:03:55.756346 master-0 kubenswrapper[7864]: I0224 02:03:55.756249 7864 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-25 01:54:04 +0000 UTC, rotation deadline is 2026-02-24 21:21:46.564040187 +0000 UTC Feb 24 02:03:55.756346 master-0 kubenswrapper[7864]: I0224 02:03:55.756338 7864 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h17m50.807704402s for next certificate rotation Feb 24 02:03:55.756836 master-0 kubenswrapper[7864]: I0224 02:03:55.756823 7864 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 02:03:55.758663 master-0 kubenswrapper[7864]: I0224 02:03:55.758609 7864 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 02:03:55.764707 master-0 kubenswrapper[7864]: I0224 02:03:55.764662 7864 log.go:25] "Validated CRI v1 runtime API" Feb 24 02:03:55.769067 master-0 kubenswrapper[7864]: I0224 02:03:55.768921 7864 log.go:25] "Validated CRI v1 image API" Feb 24 02:03:55.770395 master-0 kubenswrapper[7864]: I0224 02:03:55.770339 7864 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 24 02:03:55.776485 master-0 kubenswrapper[7864]: I0224 02:03:55.776435 7864 fs.go:135] Filesystem UUIDs: map[19c17b43-4715-4d15-ba6d-72e795fc4d8f:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 24 02:03:55.777029 master-0 kubenswrapper[7864]: I0224 02:03:55.776488 7864 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1cc1996551692c223eb12edcadd4f14bef06fed859ebb6d00f4391944783b38d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1cc1996551692c223eb12edcadd4f14bef06fed859ebb6d00f4391944783b38d/userdata/shm major:0 minor:280 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/29621914ea05b7d9aefb3ef92742f6212ca05bc6251d28674ae45265f66276a1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/29621914ea05b7d9aefb3ef92742f6212ca05bc6251d28674ae45265f66276a1/userdata/shm major:0 minor:325 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/46f23e74184a869450a53e076049b086fc11c3d08fab3acc813aa63061b356f3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/46f23e74184a869450a53e076049b086fc11c3d08fab3acc813aa63061b356f3/userdata/shm major:0 minor:70 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ba2f6486b90f665f4193dee37876ce40336ba0c3b009bf85c911f6014a84585/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ba2f6486b90f665f4193dee37876ce40336ba0c3b009bf85c911f6014a84585/userdata/shm major:0 
minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64c3913ef0868e964da24e47fde7afcb2edc5db0527066ac2d8451806802e649/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64c3913ef0868e964da24e47fde7afcb2edc5db0527066ac2d8451806802e649/userdata/shm major:0 minor:296 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/83490c1a955fe6b943eda48c6b81b0120dda14df023aa9b81ab0e80b7e90cadf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/83490c1a955fe6b943eda48c6b81b0120dda14df023aa9b81ab0e80b7e90cadf/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8ce6ca3d3b13c63a9ea107eaabf4c609711cee1bd75660ff7fc88de79d18620c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8ce6ca3d3b13c63a9ea107eaabf4c609711cee1bd75660ff7fc88de79d18620c/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/97c357985567dbd0d0a6d267a8cc4448d666a74cc353c0643a50d8ab3f6c2302/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/97c357985567dbd0d0a6d267a8cc4448d666a74cc353c0643a50d8ab3f6c2302/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a6e4933443321f6f827221301b84c881881ea51343c84ac3ad457e15891f86d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a6e4933443321f6f827221301b84c881881ea51343c84ac3ad457e15891f86d0/userdata/shm major:0 minor:301 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a75855ac22ad61c526e140082a63e50802db589f96d5c1f8fe72f371e5c93069/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a75855ac22ad61c526e140082a63e50802db589f96d5c1f8fe72f371e5c93069/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40/userdata/shm major:0 minor:122 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d130af32cfc701e2db477383fd28dc66d411455c0f39e12c729c963e3f569427/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d130af32cfc701e2db477383fd28dc66d411455c0f39e12c729c963e3f569427/userdata/shm major:0 minor:47 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441/userdata/shm major:0 minor:311 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94/userdata/shm major:0 minor:315 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f64a1a9e81543288c082ba54493b536ca2db47fef63c0b6ea8e2ecd8d4fc6a3b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f64a1a9e81543288c082ba54493b536ca2db47fef63c0b6ea8e2ecd8d4fc6a3b/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~projected/kube-api-access-mb52w:{mountpoint:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~projected/kube-api-access-mb52w major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~projected/kube-api-access-twgrj:{mountpoint:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~projected/kube-api-access-twgrj major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2cb764f6-40f8-4e87-8be0-b9d7b0364201/volumes/kubernetes.io~projected/kube-api-access-sp8hv:{mountpoint:/var/lib/kubelet/pods/2cb764f6-40f8-4e87-8be0-b9d7b0364201/volumes/kubernetes.io~projected/kube-api-access-sp8hv major:0 minor:286 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~projected/kube-api-access-79bl6:{mountpoint:/var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~projected/kube-api-access-79bl6 major:0 minor:291 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~projected/kube-api-access-x6qs2:{mountpoint:/var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~projected/kube-api-access-x6qs2 major:0 minor:69 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4f72a322-2142-482a-9b0b-2ad890181d7a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4f72a322-2142-482a-9b0b-2ad890181d7a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~projected/kube-api-access-dlg2j:{mountpoint:/var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~projected/kube-api-access-dlg2j major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57811d07-ae8a-44b7-8efb-dafc5afad31e/volumes/kubernetes.io~projected/kube-api-access-vrmsh:{mountpoint:/var/lib/kubelet/pods/57811d07-ae8a-44b7-8efb-dafc5afad31e/volumes/kubernetes.io~projected/kube-api-access-vrmsh major:0 minor:128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b/volumes/kubernetes.io~projected/kube-api-access-bbnd2:{mountpoint:/var/lib/kubelet/pods/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b/volumes/kubernetes.io~projected/kube-api-access-bbnd2 major:0 minor:118 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6320dbb5-b84d-4a57-8c65-fbed8421f84a/volumes/kubernetes.io~projected/kube-api-access-pgjlz:{mountpoint:/var/lib/kubelet/pods/6320dbb5-b84d-4a57-8c65-fbed8421f84a/volumes/kubernetes.io~projected/kube-api-access-pgjlz major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~projected/kube-api-access-ssz8p:{mountpoint:/var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~projected/kube-api-access-ssz8p major:0 minor:276 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70e2ba24-4871-4d1d-9935-156fdbeb2810/volumes/kubernetes.io~projected/kube-api-access-4nmd6:{mountpoint:/var/lib/kubelet/pods/70e2ba24-4871-4d1d-9935-156fdbeb2810/volumes/kubernetes.io~projected/kube-api-access-4nmd6 major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b098bd4-5751-4b01-8409-0688fd29233e/volumes/kubernetes.io~projected/kube-api-access-86pcb:{mountpoint:/var/lib/kubelet/pods/7b098bd4-5751-4b01-8409-0688fd29233e/volumes/kubernetes.io~projected/kube-api-access-86pcb major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~projected/kube-api-access-dqqkv:{mountpoint:/var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~projected/kube-api-access-dqqkv major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~projected/kube-api-access-n2b65:{mountpoint:/var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~projected/kube-api-access-n2b65 major:0 minor:294 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~secret/serving-cert major:0 minor:252 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/91d16f7b-390a-4d9d-99d6-cc8e210801d1/volumes/kubernetes.io~projected/kube-api-access-b8rjx:{mountpoint:/var/lib/kubelet/pods/91d16f7b-390a-4d9d-99d6-cc8e210801d1/volumes/kubernetes.io~projected/kube-api-access-b8rjx major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~projected/kube-api-access-jtf52:{mountpoint:/var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~projected/kube-api-access-jtf52 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~projected/kube-api-access major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~secret/serving-cert major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~projected/kube-api-access-nqtld:{mountpoint:/var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~projected/kube-api-access-nqtld major:0 minor:163 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~secret/webhook-cert major:0 minor:164 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~projected/kube-api-access-njjq8:{mountpoint:/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~projected/kube-api-access-njjq8 major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~secret/serving-cert major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/kube-api-access-qph4g:{mountpoint:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/kube-api-access-qph4g major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/kube-api-access-9sp95:{mountpoint:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/kube-api-access-9sp95 major:0 minor:282 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~projected/kube-api-access-6nwzm:{mountpoint:/var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~projected/kube-api-access-6nwzm major:0 minor:273 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~projected/kube-api-access-h6f7j:{mountpoint:/var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~projected/kube-api-access-h6f7j major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d8e20d47-aeb6-41bf-9715-c437beb8e9e4/volumes/kubernetes.io~projected/kube-api-access-qv6t5:{mountpoint:/var/lib/kubelet/pods/d8e20d47-aeb6-41bf-9715-c437beb8e9e4/volumes/kubernetes.io~projected/kube-api-access-qv6t5 major:0 minor:298 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db8d6627-394c-4087-bfa4-bf7580f6bb4b/volumes/kubernetes.io~projected/kube-api-access-x6lsp:{mountpoint:/var/lib/kubelet/pods/db8d6627-394c-4087-bfa4-bf7580f6bb4b/volumes/kubernetes.io~projected/kube-api-access-x6lsp major:0 minor:283 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc3d08db-45fa-4fef-b1fd-2875f22d5c45/volumes/kubernetes.io~projected/kube-api-access-2ssxg:{mountpoint:/var/lib/kubelet/pods/dc3d08db-45fa-4fef-b1fd-2875f22d5c45/volumes/kubernetes.io~projected/kube-api-access-2ssxg major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2e9cdff-8c15-43df-b8df-7fe3a73fda86/volumes/kubernetes.io~projected/kube-api-access-82hfh:{mountpoint:/var/lib/kubelet/pods/f2e9cdff-8c15-43df-b8df-7fe3a73fda86/volumes/kubernetes.io~projected/kube-api-access-82hfh major:0 minor:275 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~secret/serving-cert major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~secret/serving-cert major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~projected/kube-api-access-rc8jx:{mountpoint:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~projected/kube-api-access-rc8jx major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~projected/kube-api-access-pwjpw:{mountpoint:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~projected/kube-api-access-pwjpw major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/etcd-client major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/serving-cert major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~projected/kube-api-access-vpg26:{mountpoint:/var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~projected/kube-api-access-vpg26 major:0 minor:295 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/308c9143338255a2d781821e28724bda81f721e6ae5622f8a472a85e2ff54d30/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/209811ecfef8a1e8fb561d54a8573aa9dd59eeeaf1b37c270ff642e1d19fabd9/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/be67b62786fcab8e0b5a3bda7d5d3546a6251f36272f3ec2972ec914b996c1e8/merged major:0 minor:113 fsType:overlay blockSize:0} 
overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/a00d1e591f800be5890ddc494c1cd97e74511bf6d2ded156a3829ee39e1d0f50/merged major:0 minor:124 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/40564801473c16d5d6d6a93efdd44136f51778ea4fbe993bf95dbeccbb69cb78/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/4296fd63e79e107201bb40b9e24ea7409a5dde248c665ab48d48ba8ea3b94035/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/da7667320cc5bea12973bff2f7edd7090ee5428b7f583e90b3e4d031915e98fe/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/799211e2bc3ccf15eddecfcf8891bb54b6be6784d440865391cc3a7ed8a3c8ad/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/5037d156baea2600dcbd12b26fbcf1ea397d7de610fc348c4f436d074ce07a87/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/4f6f0481619d9ce8fdd81b1a71f7dd42be6d0fe097e556b3dacc28bd5986f444/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/a7e735f04439ecc0f7df8008813b7c2b25a3b428aafc85b5f7a58ce279f1024f/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/1b7aed5ada281a842722cde2200e5cf4f8793fcd606ce02a7cba945eebdae018/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/fc3ea121c41bfec68b325768f6f90cc367e1a6eac103fd21e7a05898744bfff4/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/ad8660e33ed3bb92daa8ab78168d98e7c4a86ba929fdbbb9fea44d9e9a20c570/merged major:0 minor:156 fsType:overlay blockSize:0} 
overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/9fdb5e6e2fffcd87a6925276df5ec8cf25e36cf7c1f353722f5971fe887c4561/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/ae5c80abbbfd6f0980efc8baed3ed024676bd0ee548d0a2f51f46603d3a977f4/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/47efc9d6e25b8424bdc25fd0d853abbbbede36b15483a38387b9ad389b01db56/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/2a5f84e8de8a56ac584bf1270459762747fd63944364a2d4d71955af6d2039ed/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/3b02a4958a53faa5343c0593a8f7d1eda2b65a82330b5403e07f5c685b368ed5/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-177:{mountpoint:/var/lib/containers/storage/overlay/35da909b255469a4fe9319b5223aa02c0268ca37fea7c3373366cdaa1f652854/merged major:0 minor:177 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/b78398c1c620898efbb7eb92bfc5080113a5c7093a99e2c16c287a58fc2319d8/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/a48eef0f416d57233b5b8f52ea746975d881c4a781fb62e6b30882b48cf2ee32/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-187:{mountpoint:/var/lib/containers/storage/overlay/ba25f8038f57ec1d0490903af258e0e9c5eb32d4618e5d217a1f4a85d5d36d58/merged major:0 minor:187 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/92ee17d746458be41e2614197f92d1c003800ae000a336e123fd8c15078b6e57/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-197:{mountpoint:/var/lib/containers/storage/overlay/e17c24b072f1bd8f98da69b5ceca83e0c2eb90c2d29e811b36076df4cf9a81cd/merged major:0 minor:197 fsType:overlay blockSize:0} 
overlay_0-205:{mountpoint:/var/lib/containers/storage/overlay/3e780453bf5b256d8750695debe3906b7c5fe1c86eea09ae56e2594278876b1b/merged major:0 minor:205 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/cd818539fd064bef04c67a80bf1649c669b34a82fbaba2b18aa6ab2696be1885/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/4be79f89cc80f16351628aa40eba4bdd39f0e1bc98929f15a52ba8182e958b5c/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/126db6a05fc8da76433f2532cbd478b25ac8cbdcdbbac760b50c1796da2cf1f9/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/9f80212e517418678c8d00c551abb0c27d5409cfcd5c3c81ee688eb8061b61c4/merged major:0 minor:221 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/03e5dada94badedb386ca7f5fc57737a01bb05acf8b206cd556eafab6b09eca3/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/194782edf342f65e2be81a4ea666f2c1d9554de829aeee4a2b636875da0ea34e/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/4283d28af102c923f119c2a591ce9f8de2e63cf08a0720388af4959dbf37c911/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/6eaf58e5425d3360cd421d9b6e58088f11e72ae4dcb6015181d2f7e44c28526f/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/8743293ad202b0a94b82ac74715922407ef2df1697bb53bbdef1a4c6e6285a89/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/56c3d2b2bc0d68dde18355cf1de922fc8bfabf7e3a8938e0b3a8f007055db113/merged major:0 minor:307 fsType:overlay blockSize:0} 
overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/4ff59adf238af09bfccd2172cfa75090e3e3e6e730e14fc3f2fd5178ec57ea53/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/6318cec2444ca3ca024c86c379a20a5e1d888a2fbef286b618c10bd70f88fda0/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/faa4d366134c75afad4a939a2913b82fc890296fbb24c0438c84973b6bc8927f/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/f166a85d420bf8c887847ab4710898fe476cfe20eccd6921f7c414ce248be584/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/6f269062d78f239e5837ccc3c68ba6ba30ebe652f257988df68b60771d978328/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/bcc3a485e9337fcae85b691f80b074f1aefb70626d8ee61c208f7efb29b33312/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/a0a366102d3ea86e9308feccf69b4a90ed92d6ffa5860868c19134f22e721f28/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/48fdd596850fb33dee39a08039a25b0e7aeaa2d8062363243e80f83de151b79e/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/e6649a6850233f601e7cdd2c8bb2b3af8f7835b1704352610abec5b53d3de44c/merged major:0 minor:332 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/ecdc0f247a1c51172081ed8d92dc6326f3539aa1e6931561df048afc8ab23dfe/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/43709e9e627b2e29b081d8085ba3b6f4a5cc10f309965a7d9d6f0587add4d769/merged major:0 minor:50 fsType:overlay blockSize:0} 
overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/9c48be07aa4a2b83910ad2078509501a3bd3fa65550cef8c463904880c287e20/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/714185c7203f6f6794260008cc5d0333e27866879ce95c5be26a31971f9599ba/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/6a50e03f17bdf40f1bd225955570b112c41937521cecdf4f69cd4e88e7d8d868/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/778aa1c44f55690ecae82358d03205d1cc72cd30c43d5b7e2cf9553ed7a0a60f/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/478149fef8c8376e40265472104474866b7ec13f4752e90645bfb653d36d4517/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/277a523a9a47a8e2e7bcb85c54a8915e17b8774dbb6ee94f9db94833aed49a6e/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/112fef78265e4400c9ba8ee4d5edbe3f7154cd01d71c41f5a6ccb6c033cdc539/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/6aa6281ad9bc908b2a9a5b9c15f976efa99b65d11fa654df0eb1de0430e24836/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/04b839c0450c2d317f119fff59dc622e0a643ec517a5f5ae54175b5231c4c445/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/a59a5e0187f9dafb3e607918ef5df7cd82a0f7f2b518210991c1f0ebb289c9ca/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/aa3c9336488fe18ee9e2320c410118fc856f30e16995b090554e349533f1f730/merged major:0 minor:83 fsType:overlay blockSize:0} 
overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/dc6e1862c306b9ba9674114824f17dbd227146db9b33a64c73bde257d356cf11/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/22bc5c7d6d3d24180b1b5226a36ec9437496f7c62386a768ef28d8acb61fd8b9/merged major:0 minor:97 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/504b349aa8734fd2baf65b11510f4bb99dcada21352ae6c55a0951e6f6b5d0cd/merged major:0 minor:99 fsType:overlay blockSize:0}] Feb 24 02:03:55.814844 master-0 kubenswrapper[7864]: I0224 02:03:55.813790 7864 manager.go:217] Machine: {Timestamp:2026-02-24 02:03:55.812007526 +0000 UTC m=+0.139661178 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:1d448a69ed5349cda3229fbde6198537 SystemUUID:1d448a69-ed53-49cd-a322-9fbde6198537 BootID:db0156e3-cefa-4894-85d6-ad7931f79daa Filesystems:[{Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~projected/kube-api-access-njjq8 DeviceMajor:0 DeviceMinor:266 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~projected/kube-api-access-h6f7j DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b098bd4-5751-4b01-8409-0688fd29233e/volumes/kubernetes.io~projected/kube-api-access-86pcb DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:164 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f2e9cdff-8c15-43df-b8df-7fe3a73fda86/volumes/kubernetes.io~projected/kube-api-access-82hfh DeviceMajor:0 DeviceMinor:275 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40/userdata/shm DeviceMajor:0 DeviceMinor:122 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~projected/kube-api-access-pwjpw DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1cc1996551692c223eb12edcadd4f14bef06fed859ebb6d00f4391944783b38d/userdata/shm DeviceMajor:0 DeviceMinor:280 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/kube-api-access-9sp95 DeviceMajor:0 DeviceMinor:282 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~projected/kube-api-access-x6qs2 DeviceMajor:0 DeviceMinor:69 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-221 DeviceMajor:0 DeviceMinor:221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2cb764f6-40f8-4e87-8be0-b9d7b0364201/volumes/kubernetes.io~projected/kube-api-access-sp8hv DeviceMajor:0 DeviceMinor:286 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~projected/kube-api-access-79bl6 DeviceMajor:0 DeviceMinor:291 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f64a1a9e81543288c082ba54493b536ca2db47fef63c0b6ea8e2ecd8d4fc6a3b/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~projected/kube-api-access-n2b65 DeviceMajor:0 DeviceMinor:294 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/97c357985567dbd0d0a6d267a8cc4448d666a74cc353c0643a50d8ab3f6c2302/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~projected/kube-api-access-dqqkv DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a75855ac22ad61c526e140082a63e50802db589f96d5c1f8fe72f371e5c93069/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d130af32cfc701e2db477383fd28dc66d411455c0f39e12c729c963e3f569427/userdata/shm DeviceMajor:0 DeviceMinor:47 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-187 DeviceMajor:0 DeviceMinor:187 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~projected/kube-api-access-jtf52 DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6320dbb5-b84d-4a57-8c65-fbed8421f84a/volumes/kubernetes.io~projected/kube-api-access-pgjlz DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91d16f7b-390a-4d9d-99d6-cc8e210801d1/volumes/kubernetes.io~projected/kube-api-access-b8rjx DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~projected/kube-api-access-6nwzm DeviceMajor:0 DeviceMinor:273 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/5ba2f6486b90f665f4193dee37876ce40336ba0c3b009bf85c911f6014a84585/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/kube-api-access-qph4g DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dc3d08db-45fa-4fef-b1fd-2875f22d5c45/volumes/kubernetes.io~projected/kube-api-access-2ssxg DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57811d07-ae8a-44b7-8efb-dafc5afad31e/volumes/kubernetes.io~projected/kube-api-access-vrmsh DeviceMajor:0 DeviceMinor:128 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~projected/kube-api-access-twgrj DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~projected/kube-api-access-ssz8p DeviceMajor:0 DeviceMinor:276 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~projected/kube-api-access-vpg26 DeviceMajor:0 DeviceMinor:295 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64c3913ef0868e964da24e47fde7afcb2edc5db0527066ac2d8451806802e649/userdata/shm 
DeviceMajor:0 DeviceMinor:296 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-177 DeviceMajor:0 DeviceMinor:177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8ce6ca3d3b13c63a9ea107eaabf4c609711cee1bd75660ff7fc88de79d18620c/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/70e2ba24-4871-4d1d-9935-156fdbeb2810/volumes/kubernetes.io~projected/kube-api-access-4nmd6 DeviceMajor:0 DeviceMinor:135 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~projected/kube-api-access-nqtld DeviceMajor:0 DeviceMinor:163 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94/userdata/shm DeviceMajor:0 DeviceMinor:315 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~projected/kube-api-access-mb52w DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/db8d6627-394c-4087-bfa4-bf7580f6bb4b/volumes/kubernetes.io~projected/kube-api-access-x6lsp DeviceMajor:0 DeviceMinor:283 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b/volumes/kubernetes.io~projected/kube-api-access-bbnd2 DeviceMajor:0 DeviceMinor:118 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~projected/kube-api-access-dlg2j DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~projected/kube-api-access-rc8jx DeviceMajor:0 DeviceMinor:141 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/83490c1a955fe6b943eda48c6b81b0120dda14df023aa9b81ab0e80b7e90cadf/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-197 DeviceMajor:0 DeviceMinor:197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-205 DeviceMajor:0 DeviceMinor:205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a6e4933443321f6f827221301b84c881881ea51343c84ac3ad457e15891f86d0/userdata/shm DeviceMajor:0 DeviceMinor:301 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/29621914ea05b7d9aefb3ef92742f6212ca05bc6251d28674ae45265f66276a1/userdata/shm DeviceMajor:0 DeviceMinor:325 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4f72a322-2142-482a-9b0b-2ad890181d7a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:110 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441/userdata/shm DeviceMajor:0 DeviceMinor:311 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d8e20d47-aeb6-41bf-9715-c437beb8e9e4/volumes/kubernetes.io~projected/kube-api-access-qv6t5 DeviceMajor:0 DeviceMinor:298 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/46f23e74184a869450a53e076049b086fc11c3d08fab3acc813aa63061b356f3/userdata/shm DeviceMajor:0 DeviceMinor:70 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 
Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:1cc1996551692c2 MacAddress:5a:29:ec:ef:58:96 Speed:10000 Mtu:8900} {Name:35a3cac3cfce949 MacAddress:ee:9e:9b:75:36:0d Speed:10000 Mtu:8900} {Name:64c3913ef0868e9 MacAddress:7e:3f:1e:1d:af:fd Speed:10000 Mtu:8900} {Name:83490c1a955fe6b MacAddress:e6:33:82:ce:bc:48 Speed:10000 Mtu:8900} {Name:9b9766c83ab547d MacAddress:46:8b:6f:b4:88:e8 Speed:10000 Mtu:8900} {Name:a6e4933443321f6 MacAddress:d6:6d:9f:5c:d5:20 Speed:10000 Mtu:8900} {Name:a75855ac22ad61c MacAddress:8e:db:c6:e3:b7:1f Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:56:04:d0:ce:78:ac Speed:0 Mtu:8900} {Name:d884f8a9271a3be MacAddress:0e:0d:b7:3a:9b:0f Speed:10000 Mtu:8900} {Name:e54e467fec77344 MacAddress:ce:70:87:ab:71:a9 Speed:10000 Mtu:8900} {Name:e6d28a4266f3905 MacAddress:76:ad:3d:9a:5b:1f Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:b3:1a:4a Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:91:46:2f Speed:-1 Mtu:9000} {Name:f64a1a9e8154328 MacAddress:96:6b:44:66:74:fa Speed:10000 Mtu:8900} {Name:ff39808811189af MacAddress:ae:4c:bd:18:d8:ba Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:72:e3:6b:de:66:46 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} 
{Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction 
Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 24 02:03:55.814844 master-0 kubenswrapper[7864]: I0224 02:03:55.814829 7864 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 24 02:03:55.815116 master-0 kubenswrapper[7864]: I0224 02:03:55.814952 7864 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 24 02:03:55.815269 master-0 kubenswrapper[7864]: I0224 02:03:55.815178 7864 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 24 02:03:55.815445 master-0 kubenswrapper[7864]: I0224 02:03:55.815413 7864 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 24 02:03:55.815713 master-0 kubenswrapper[7864]: I0224 02:03:55.815445 7864 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 24 02:03:55.815781 master-0 kubenswrapper[7864]: I0224 02:03:55.815745 7864 topology_manager.go:138] "Creating topology manager with none policy" Feb 24 02:03:55.815781 master-0 kubenswrapper[7864]: I0224 02:03:55.815759 7864 container_manager_linux.go:303] "Creating device plugin manager" Feb 24 02:03:55.815781 master-0 kubenswrapper[7864]: I0224 02:03:55.815770 7864 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 02:03:55.815881 master-0 kubenswrapper[7864]: I0224 02:03:55.815801 7864 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 02:03:55.815939 master-0 kubenswrapper[7864]: I0224 02:03:55.815909 7864 state_mem.go:36] "Initialized new in-memory state store" Feb 24 02:03:55.816085 master-0 kubenswrapper[7864]: I0224 02:03:55.816005 7864 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 24 02:03:55.816123 master-0 kubenswrapper[7864]: I0224 02:03:55.816090 7864 kubelet.go:418] "Attempting to sync node with API server" Feb 24 02:03:55.816168 master-0 kubenswrapper[7864]: I0224 02:03:55.816123 7864 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 24 02:03:55.816168 master-0 kubenswrapper[7864]: I0224 02:03:55.816144 7864 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 24 02:03:55.816168 master-0 kubenswrapper[7864]: I0224 02:03:55.816158 7864 kubelet.go:324] "Adding apiserver pod source" Feb 24 02:03:55.816264 master-0 
kubenswrapper[7864]: I0224 02:03:55.816173 7864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 24 02:03:55.823176 master-0 kubenswrapper[7864]: I0224 02:03:55.823140 7864 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 24 02:03:55.823328 master-0 kubenswrapper[7864]: I0224 02:03:55.823312 7864 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 24 02:03:55.823646 master-0 kubenswrapper[7864]: I0224 02:03:55.823627 7864 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 24 02:03:55.823759 master-0 kubenswrapper[7864]: I0224 02:03:55.823740 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 24 02:03:55.823794 master-0 kubenswrapper[7864]: I0224 02:03:55.823762 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 24 02:03:55.823794 master-0 kubenswrapper[7864]: I0224 02:03:55.823771 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 24 02:03:55.823794 master-0 kubenswrapper[7864]: I0224 02:03:55.823778 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 24 02:03:55.823794 master-0 kubenswrapper[7864]: I0224 02:03:55.823785 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 24 02:03:55.823794 master-0 kubenswrapper[7864]: I0224 02:03:55.823793 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 24 02:03:55.823957 master-0 kubenswrapper[7864]: I0224 02:03:55.823801 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 24 02:03:55.823957 master-0 kubenswrapper[7864]: I0224 02:03:55.823809 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 24 02:03:55.823957 master-0 
kubenswrapper[7864]: I0224 02:03:55.823817 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 24 02:03:55.823957 master-0 kubenswrapper[7864]: I0224 02:03:55.823824 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 24 02:03:55.823957 master-0 kubenswrapper[7864]: I0224 02:03:55.823835 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 24 02:03:55.823957 master-0 kubenswrapper[7864]: I0224 02:03:55.823848 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 24 02:03:55.823957 master-0 kubenswrapper[7864]: I0224 02:03:55.823873 7864 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 24 02:03:55.824330 master-0 kubenswrapper[7864]: I0224 02:03:55.824255 7864 server.go:1280] "Started kubelet" Feb 24 02:03:55.824953 master-0 kubenswrapper[7864]: I0224 02:03:55.824900 7864 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 24 02:03:55.825645 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 24 02:03:55.826446 master-0 kubenswrapper[7864]: I0224 02:03:55.826358 7864 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 24 02:03:55.826483 master-0 kubenswrapper[7864]: I0224 02:03:55.826467 7864 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 24 02:03:55.828241 master-0 kubenswrapper[7864]: I0224 02:03:55.827245 7864 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 24 02:03:55.828241 master-0 kubenswrapper[7864]: I0224 02:03:55.827952 7864 server.go:449] "Adding debug handlers to kubelet server" Feb 24 02:03:55.828343 master-0 kubenswrapper[7864]: I0224 02:03:55.828316 7864 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 02:03:55.829564 master-0 kubenswrapper[7864]: I0224 02:03:55.829541 7864 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 02:03:55.830032 master-0 kubenswrapper[7864]: I0224 02:03:55.829971 7864 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 24 02:03:55.830032 master-0 kubenswrapper[7864]: I0224 02:03:55.830008 7864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 24 02:03:55.830228 master-0 kubenswrapper[7864]: I0224 02:03:55.830133 7864 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-25 01:54:04 +0000 UTC, rotation deadline is 2026-02-24 21:06:29.228804974 +0000 UTC Feb 24 02:03:55.830228 master-0 kubenswrapper[7864]: I0224 02:03:55.830175 7864 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h2m33.398631738s for next certificate rotation Feb 24 02:03:55.830313 master-0 kubenswrapper[7864]: I0224 02:03:55.830263 7864 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 24 02:03:55.830313 master-0 kubenswrapper[7864]: I0224 02:03:55.830277 
7864 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 24 02:03:55.830365 master-0 kubenswrapper[7864]: I0224 02:03:55.830348 7864 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 24 02:03:55.831254 master-0 kubenswrapper[7864]: I0224 02:03:55.831227 7864 factory.go:55] Registering systemd factory Feb 24 02:03:55.831321 master-0 kubenswrapper[7864]: I0224 02:03:55.831261 7864 factory.go:221] Registration of the systemd container factory successfully Feb 24 02:03:55.831788 master-0 kubenswrapper[7864]: I0224 02:03:55.831768 7864 factory.go:153] Registering CRI-O factory Feb 24 02:03:55.831788 master-0 kubenswrapper[7864]: I0224 02:03:55.831788 7864 factory.go:221] Registration of the crio container factory successfully Feb 24 02:03:55.831891 master-0 kubenswrapper[7864]: I0224 02:03:55.831874 7864 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 24 02:03:55.831945 master-0 kubenswrapper[7864]: I0224 02:03:55.831912 7864 factory.go:103] Registering Raw factory Feb 24 02:03:55.831945 master-0 kubenswrapper[7864]: I0224 02:03:55.831938 7864 manager.go:1196] Started watching for new ooms in manager Feb 24 02:03:55.838120 master-0 kubenswrapper[7864]: I0224 02:03:55.837518 7864 manager.go:319] Starting recovery of all containers Feb 24 02:03:55.841299 master-0 kubenswrapper[7864]: I0224 02:03:55.840646 7864 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 02:03:55.849438 master-0 kubenswrapper[7864]: I0224 02:03:55.849375 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f72a322-2142-482a-9b0b-2ad890181d7a" volumeName="kubernetes.io/configmap/4f72a322-2142-482a-9b0b-2ad890181d7a-service-ca" seLinuxMountContext="" Feb 24 
02:03:55.849508 master-0 kubenswrapper[7864]: I0224 02:03:55.849448 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b36d8451-0fda-4d9d-a850-d05c8f847016" volumeName="kubernetes.io/secret/b36d8451-0fda-4d9d-a850-d05c8f847016-serving-cert" seLinuxMountContext="" Feb 24 02:03:55.849508 master-0 kubenswrapper[7864]: I0224 02:03:55.849474 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c84dc269-43ae-4083-9998-a0b3c90bb681" volumeName="kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-bound-sa-token" seLinuxMountContext="" Feb 24 02:03:55.849508 master-0 kubenswrapper[7864]: I0224 02:03:55.849495 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8e20d47-aeb6-41bf-9715-c437beb8e9e4" volumeName="kubernetes.io/projected/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-kube-api-access-qv6t5" seLinuxMountContext="" Feb 24 02:03:55.849646 master-0 kubenswrapper[7864]: I0224 02:03:55.849517 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-serving-cert" seLinuxMountContext="" Feb 24 02:03:55.849646 master-0 kubenswrapper[7864]: I0224 02:03:55.849537 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-client" seLinuxMountContext="" Feb 24 02:03:55.849646 master-0 kubenswrapper[7864]: I0224 02:03:55.849556 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="303d5058-84df-40d1-a941-896b093ae470" volumeName="kubernetes.io/secret/303d5058-84df-40d1-a941-896b093ae470-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 24 
02:03:55.849646 master-0 kubenswrapper[7864]: I0224 02:03:55.849616 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="523033b8-4101-4a55-8320-55bef04ddaaf" volumeName="kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-ovnkube-config" seLinuxMountContext="" Feb 24 02:03:55.849646 master-0 kubenswrapper[7864]: I0224 02:03:55.849641 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c84dc269-43ae-4083-9998-a0b3c90bb681" volumeName="kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-kube-api-access-9sp95" seLinuxMountContext="" Feb 24 02:03:55.849813 master-0 kubenswrapper[7864]: I0224 02:03:55.849664 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-script-lib" seLinuxMountContext="" Feb 24 02:03:55.849813 master-0 kubenswrapper[7864]: I0224 02:03:55.849685 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-config" seLinuxMountContext="" Feb 24 02:03:55.849813 master-0 kubenswrapper[7864]: I0224 02:03:55.849704 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f85222bf-f51a-4232-8db1-1e6ee593617b" volumeName="kubernetes.io/projected/f85222bf-f51a-4232-8db1-1e6ee593617b-kube-api-access" seLinuxMountContext="" Feb 24 02:03:55.849813 master-0 kubenswrapper[7864]: I0224 02:03:55.849723 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/projected/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-kube-api-access-rc8jx" seLinuxMountContext="" Feb 24 
02:03:55.849813 master-0 kubenswrapper[7864]: I0224 02:03:55.849754 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="523033b8-4101-4a55-8320-55bef04ddaaf" volumeName="kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-env-overrides" seLinuxMountContext="" Feb 24 02:03:55.849813 master-0 kubenswrapper[7864]: I0224 02:03:55.849773 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ccd8e-d964-4c03-8ffc-51b464030c25" volumeName="kubernetes.io/configmap/6a9ccd8e-d964-4c03-8ffc-51b464030c25-trusted-ca" seLinuxMountContext="" Feb 24 02:03:55.850021 master-0 kubenswrapper[7864]: I0224 02:03:55.849793 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b098bd4-5751-4b01-8409-0688fd29233e" volumeName="kubernetes.io/projected/7b098bd4-5751-4b01-8409-0688fd29233e-kube-api-access-86pcb" seLinuxMountContext="" Feb 24 02:03:55.850021 master-0 kubenswrapper[7864]: I0224 02:03:55.849846 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" volumeName="kubernetes.io/projected/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-kube-api-access-dqqkv" seLinuxMountContext="" Feb 24 02:03:55.850021 master-0 kubenswrapper[7864]: I0224 02:03:55.849865 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c84dc269-43ae-4083-9998-a0b3c90bb681" volumeName="kubernetes.io/configmap/c84dc269-43ae-4083-9998-a0b3c90bb681-trusted-ca" seLinuxMountContext="" Feb 24 02:03:55.850021 master-0 kubenswrapper[7864]: I0224 02:03:55.849887 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" volumeName="kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-config" seLinuxMountContext="" Feb 24 02:03:55.850021 
master-0 kubenswrapper[7864]: I0224 02:03:55.849936 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adc1097b-c1ab-4f09-965d-1c819671475b" volumeName="kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-env-overrides" seLinuxMountContext="" Feb 24 02:03:55.850021 master-0 kubenswrapper[7864]: I0224 02:03:55.849955 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c92835f0-7f32-4584-8304-843d7979392a" volumeName="kubernetes.io/projected/c92835f0-7f32-4584-8304-843d7979392a-kube-api-access-6nwzm" seLinuxMountContext="" Feb 24 02:03:55.850021 master-0 kubenswrapper[7864]: I0224 02:03:55.849978 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5463fbf-ac21-4058-9a3b-30d0e5ea31b7" volumeName="kubernetes.io/projected/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-kube-api-access" seLinuxMountContext="" Feb 24 02:03:55.850021 master-0 kubenswrapper[7864]: I0224 02:03:55.849996 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="523033b8-4101-4a55-8320-55bef04ddaaf" volumeName="kubernetes.io/secret/523033b8-4101-4a55-8320-55bef04ddaaf-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 24 02:03:55.850021 master-0 kubenswrapper[7864]: I0224 02:03:55.850015 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c" volumeName="kubernetes.io/secret/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-serving-cert" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850036 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b5620d6-a5fe-45d7-b39e-8bed7f602a17" volumeName="kubernetes.io/configmap/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-config" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 
kubenswrapper[7864]: I0224 02:03:55.850057 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adc1097b-c1ab-4f09-965d-1c819671475b" volumeName="kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-ovnkube-identity-cm" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850079 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b36d8451-0fda-4d9d-a850-d05c8f847016" volumeName="kubernetes.io/configmap/b36d8451-0fda-4d9d-a850-d05c8f847016-config" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850102 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02f1d753-983a-4c4a-b1a0-560de173859a" volumeName="kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-profile-collector-cert" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850123 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="12b89e05-a503-47aa-90b2-4d741e015b19" volumeName="kubernetes.io/projected/12b89e05-a503-47aa-90b2-4d741e015b19-kube-api-access-twgrj" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850143 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="303d5058-84df-40d1-a941-896b093ae470" volumeName="kubernetes.io/empty-dir/303d5058-84df-40d1-a941-896b093ae470-operand-assets" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850164 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3332acec-1553-4594-a903-a322399f6d9d" volumeName="kubernetes.io/projected/3332acec-1553-4594-a903-a322399f6d9d-kube-api-access-x6qs2" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 
kubenswrapper[7864]: I0224 02:03:55.850183 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ccd8e-d964-4c03-8ffc-51b464030c25" volumeName="kubernetes.io/projected/6a9ccd8e-d964-4c03-8ffc-51b464030c25-kube-api-access-ssz8p" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850204 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8e20d47-aeb6-41bf-9715-c437beb8e9e4" volumeName="kubernetes.io/configmap/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-iptables-alerter-script" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850225 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db8d6627-394c-4087-bfa4-bf7580f6bb4b" volumeName="kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-images" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850244 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/secret/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovn-node-metrics-cert" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850265 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/projected/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-kube-api-access-pwjpw" seLinuxMountContext="" Feb 24 02:03:55.850293 master-0 kubenswrapper[7864]: I0224 02:03:55.850284 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="12b89e05-a503-47aa-90b2-4d741e015b19" volumeName="kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-profile-collector-cert" seLinuxMountContext="" Feb 24 02:03:55.850293 
master-0 kubenswrapper[7864]: I0224 02:03:55.850303 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="303d5058-84df-40d1-a941-896b093ae470" volumeName="kubernetes.io/projected/303d5058-84df-40d1-a941-896b093ae470-kube-api-access-79bl6" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850325 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57811d07-ae8a-44b7-8efb-dafc5afad31e" volumeName="kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-whereabouts-configmap" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850345 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c92835f0-7f32-4584-8304-843d7979392a" volumeName="kubernetes.io/empty-dir/c92835f0-7f32-4584-8304-843d7979392a-available-featuregates" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850366 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2e9cdff-8c15-43df-b8df-7fe3a73fda86" volumeName="kubernetes.io/configmap/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-telemetry-config" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850384 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91d16f7b-390a-4d9d-99d6-cc8e210801d1" volumeName="kubernetes.io/configmap/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-trusted-ca" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850403 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" volumeName="kubernetes.io/projected/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-kube-api-access-2ssxg" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850438 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" volumeName="kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-kube-api-access-qph4g" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850462 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" volumeName="kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-bound-sa-token" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850490 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-config" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850518 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="523033b8-4101-4a55-8320-55bef04ddaaf" volumeName="kubernetes.io/projected/523033b8-4101-4a55-8320-55bef04ddaaf-kube-api-access-dlg2j" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850537 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57811d07-ae8a-44b7-8efb-dafc5afad31e" volumeName="kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850561 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6320dbb5-b84d-4a57-8c65-fbed8421f84a" volumeName="kubernetes.io/projected/6320dbb5-b84d-4a57-8c65-fbed8421f84a-kube-api-access-pgjlz" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850621 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a02536a3-7d3e-4e74-9625-aefed518ec35" volumeName="kubernetes.io/secret/a02536a3-7d3e-4e74-9625-aefed518ec35-serving-cert" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850650 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b36d8451-0fda-4d9d-a850-d05c8f847016" volumeName="kubernetes.io/projected/b36d8451-0fda-4d9d-a850-d05c8f847016-kube-api-access-njjq8" seLinuxMountContext=""
Feb 24 02:03:55.850765 master-0 kubenswrapper[7864]: I0224 02:03:55.850676 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcbda577-b943-4b5c-b041-948aece8e40f" volumeName="kubernetes.io/configmap/fcbda577-b943-4b5c-b041-948aece8e40f-config" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.850788 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5da829af-05fb-4f6e-9bec-c4dcc9cbec4b" volumeName="kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cni-binary-copy" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.850828 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adc1097b-c1ab-4f09-965d-1c819671475b" volumeName="kubernetes.io/projected/adc1097b-c1ab-4f09-965d-1c819671475b-kube-api-access-nqtld" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.850866 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c92835f0-7f32-4584-8304-843d7979392a" volumeName="kubernetes.io/secret/c92835f0-7f32-4584-8304-843d7979392a-serving-cert" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.850887 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f85222bf-f51a-4232-8db1-1e6ee593617b" volumeName="kubernetes.io/configmap/f85222bf-f51a-4232-8db1-1e6ee593617b-config" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.850906 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcbda577-b943-4b5c-b041-948aece8e40f" volumeName="kubernetes.io/projected/fcbda577-b943-4b5c-b041-948aece8e40f-kube-api-access-vpg26" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.850940 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02f1d753-983a-4c4a-b1a0-560de173859a" volumeName="kubernetes.io/projected/02f1d753-983a-4c4a-b1a0-560de173859a-kube-api-access-mb52w" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.850959 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adc1097b-c1ab-4f09-965d-1c819671475b" volumeName="kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.850979 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-service-ca-bundle" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.850997 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5463fbf-ac21-4058-9a3b-30d0e5ea31b7" volumeName="kubernetes.io/secret/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-serving-cert" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851017 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f85222bf-f51a-4232-8db1-1e6ee593617b" volumeName="kubernetes.io/secret/f85222bf-f51a-4232-8db1-1e6ee593617b-serving-cert" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851044 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-ca" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851072 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5da829af-05fb-4f6e-9bec-c4dcc9cbec4b" volumeName="kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-daemon-config" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851097 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a02536a3-7d3e-4e74-9625-aefed518ec35" volumeName="kubernetes.io/configmap/a02536a3-7d3e-4e74-9625-aefed518ec35-config" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851115 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" volumeName="kubernetes.io/configmap/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-trusted-ca" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851134 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/secret/cabdddba-5507-4e47-98ef-a00c6d0f305d-serving-cert" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851154 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5463fbf-ac21-4058-9a3b-30d0e5ea31b7" volumeName="kubernetes.io/configmap/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-config" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851175 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57811d07-ae8a-44b7-8efb-dafc5afad31e" volumeName="kubernetes.io/projected/57811d07-ae8a-44b7-8efb-dafc5afad31e-kube-api-access-vrmsh" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851195 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e2ba24-4871-4d1d-9935-156fdbeb2810" volumeName="kubernetes.io/projected/70e2ba24-4871-4d1d-9935-156fdbeb2810-kube-api-access-4nmd6" seLinuxMountContext=""
Feb 24 02:03:55.851216 master-0 kubenswrapper[7864]: I0224 02:03:55.851213 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/projected/cabdddba-5507-4e47-98ef-a00c6d0f305d-kube-api-access-h6f7j" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851235 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-env-overrides" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851254 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcbda577-b943-4b5c-b041-948aece8e40f" volumeName="kubernetes.io/secret/fcbda577-b943-4b5c-b041-948aece8e40f-serving-cert" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851273 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3332acec-1553-4594-a903-a322399f6d9d" volumeName="kubernetes.io/secret/3332acec-1553-4594-a903-a322399f6d9d-metrics-tls" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851332 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" volumeName="kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-images" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851356 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851374 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-service-ca" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851394 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db8d6627-394c-4087-bfa4-bf7580f6bb4b" volumeName="kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-auth-proxy-config" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851415 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db8d6627-394c-4087-bfa4-bf7580f6bb4b" volumeName="kubernetes.io/projected/db8d6627-394c-4087-bfa4-bf7580f6bb4b-kube-api-access-x6lsp" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851436 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-config" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851457 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f72a322-2142-482a-9b0b-2ad890181d7a" volumeName="kubernetes.io/projected/4f72a322-2142-482a-9b0b-2ad890181d7a-kube-api-access" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851477 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c" volumeName="kubernetes.io/configmap/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-config" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851531 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c" volumeName="kubernetes.io/projected/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-kube-api-access-n2b65" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851551 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91d16f7b-390a-4d9d-99d6-cc8e210801d1" volumeName="kubernetes.io/projected/91d16f7b-390a-4d9d-99d6-cc8e210801d1-kube-api-access-b8rjx" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851596 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b5620d6-a5fe-45d7-b39e-8bed7f602a17" volumeName="kubernetes.io/projected/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-kube-api-access-jtf52" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851618 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2e9cdff-8c15-43df-b8df-7fe3a73fda86" volumeName="kubernetes.io/projected/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-kube-api-access-82hfh" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851638 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2cb764f6-40f8-4e87-8be0-b9d7b0364201" volumeName="kubernetes.io/projected/2cb764f6-40f8-4e87-8be0-b9d7b0364201-kube-api-access-sp8hv" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851657 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57811d07-ae8a-44b7-8efb-dafc5afad31e" volumeName="kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-binary-copy" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851676 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5da829af-05fb-4f6e-9bec-c4dcc9cbec4b" volumeName="kubernetes.io/projected/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-kube-api-access-bbnd2" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851695 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b5620d6-a5fe-45d7-b39e-8bed7f602a17" volumeName="kubernetes.io/secret/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-serving-cert" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851717 7864 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a02536a3-7d3e-4e74-9625-aefed518ec35" volumeName="kubernetes.io/projected/a02536a3-7d3e-4e74-9625-aefed518ec35-kube-api-access" seLinuxMountContext=""
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851743 7864 reconstruct.go:97] "Volume reconstruction finished"
Feb 24 02:03:55.851867 master-0 kubenswrapper[7864]: I0224 02:03:55.851757 7864 reconciler.go:26] "Reconciler: start to sync state"
Feb 24 02:03:55.854048 master-0 kubenswrapper[7864]: I0224 02:03:55.854027 7864 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 24 02:03:55.870453 master-0 kubenswrapper[7864]: I0224 02:03:55.870347 7864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 24 02:03:55.873276 master-0 kubenswrapper[7864]: I0224 02:03:55.873255 7864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 24 02:03:55.873340 master-0 kubenswrapper[7864]: I0224 02:03:55.873309 7864 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 24 02:03:55.873340 master-0 kubenswrapper[7864]: I0224 02:03:55.873339 7864 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 24 02:03:55.873446 master-0 kubenswrapper[7864]: E0224 02:03:55.873388 7864 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 24 02:03:55.876055 master-0 kubenswrapper[7864]: I0224 02:03:55.876013 7864 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 24 02:03:55.893118 master-0 kubenswrapper[7864]: I0224 02:03:55.893048 7864 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="0261b05dc86f44c57d1260d8e9e574b7afb0942396c397b4be98f1486a4e967b" exitCode=0
Feb 24 02:03:55.893118 master-0 kubenswrapper[7864]: I0224 02:03:55.893112 7864 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="46f1df0f3044924b6c94bc53975525ce01b17baddc32b6007d1fff90c64f595f" exitCode=0
Feb 24 02:03:55.893225 master-0 kubenswrapper[7864]: I0224 02:03:55.893126 7864 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="cfdb24d0d0b1a9e1ffe1c98259396806799adff6a318a37a19e4e31ee02f6987" exitCode=0
Feb 24 02:03:55.893225 master-0 kubenswrapper[7864]: I0224 02:03:55.893151 7864 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="bdb96a50270730f3bce2e557a04b02a2063f4f2e15fbd55d5081bf5036b5f652" exitCode=0
Feb 24 02:03:55.893225 master-0 kubenswrapper[7864]: I0224 02:03:55.893183 7864 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="73bcd3ba04771dbfaf54cb795e59bd88d55d88d355f426be066ffb50beee1f86" exitCode=0
Feb 24 02:03:55.893225 master-0 kubenswrapper[7864]: I0224 02:03:55.893198 7864 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="78fb207cbc767c0fee7b7d210f99c9aaf3165a7c791dd4e586c95fb618507ed8" exitCode=0
Feb 24 02:03:55.896735 master-0 kubenswrapper[7864]: I0224 02:03:55.896689 7864 generic.go:334] "Generic (PLEG): container finished" podID="e8e1e397-edd4-4278-b3ea-25fe829de509" containerID="9084ba926bf7975865b803686ed689ae33dbbe263dc377c963e7af79a6dfafbb" exitCode=0
Feb 24 02:03:55.902361 master-0 kubenswrapper[7864]: I0224 02:03:55.902331 7864 generic.go:334] "Generic (PLEG): container finished" podID="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" containerID="c3063a301534062c954aa79867d0cc96573d7146ccda3bfb83406935c96bf2b9" exitCode=0
Feb 24 02:03:55.910789 master-0 kubenswrapper[7864]: I0224 02:03:55.910756 7864 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416" exitCode=0
Feb 24 02:03:55.940686 master-0 kubenswrapper[7864]: I0224 02:03:55.940641 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="25efea610d9bc2514ea6f62c6d5763641769d0262eae03f839a9b98d1e3382eb" exitCode=1
Feb 24 02:03:55.946345 master-0 kubenswrapper[7864]: I0224 02:03:55.946314 7864 generic.go:334] "Generic (PLEG): container finished" podID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerID="35c312973828464e3d9786034ffddad219bbd2d62792822db99238b48a9c981d" exitCode=0
Feb 24 02:03:55.949793 master-0 kubenswrapper[7864]: I0224 02:03:55.949768 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 24 02:03:55.950211 master-0 kubenswrapper[7864]: I0224 02:03:55.950183 7864 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="4ea164dd4d44c905424ce0b0b3ea58702494938b88cbbbe52d4ce16914c7762b" exitCode=1
Feb 24 02:03:55.950211 master-0 kubenswrapper[7864]: I0224 02:03:55.950207 7864 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="17bbd7fbbeaf0dec034d902b8e7575b6559c59f312f1a534ffc4119208ba1272" exitCode=0
Feb 24 02:03:55.974916 master-0 kubenswrapper[7864]: I0224 02:03:55.974866 7864 manager.go:324] Recovery completed
Feb 24 02:03:55.978465 master-0 kubenswrapper[7864]: E0224 02:03:55.978426 7864 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 24 02:03:56.006161 master-0 kubenswrapper[7864]: I0224 02:03:56.006134 7864 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 24 02:03:56.006251 master-0 kubenswrapper[7864]: I0224 02:03:56.006165 7864 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 24 02:03:56.006251 master-0 kubenswrapper[7864]: I0224 02:03:56.006198 7864 state_mem.go:36] "Initialized new in-memory state store"
Feb 24 02:03:56.006480 master-0 kubenswrapper[7864]: I0224 02:03:56.006457 7864 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 24 02:03:56.006542 master-0 kubenswrapper[7864]: I0224 02:03:56.006485 7864 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 24 02:03:56.006542 master-0 kubenswrapper[7864]: I0224 02:03:56.006524 7864 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Feb 24 02:03:56.006542 master-0 kubenswrapper[7864]: I0224 02:03:56.006538 7864 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Feb 24 02:03:56.006653 master-0 kubenswrapper[7864]: I0224 02:03:56.006553 7864 policy_none.go:49] "None policy: Start"
Feb 24 02:03:56.008669 master-0 kubenswrapper[7864]: I0224 02:03:56.008635 7864 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 24 02:03:56.008720 master-0 kubenswrapper[7864]: I0224 02:03:56.008678 7864 state_mem.go:35] "Initializing new in-memory state store"
Feb 24 02:03:56.009005 master-0 kubenswrapper[7864]: I0224 02:03:56.008976 7864 state_mem.go:75] "Updated machine memory state"
Feb 24 02:03:56.009005 master-0 kubenswrapper[7864]: I0224 02:03:56.009000 7864 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Feb 24 02:03:56.024209 master-0 kubenswrapper[7864]: I0224 02:03:56.024167 7864 manager.go:334] "Starting Device Plugin manager"
Feb 24 02:03:56.024391 master-0 kubenswrapper[7864]: I0224 02:03:56.024366 7864 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 24 02:03:56.024391 master-0 kubenswrapper[7864]: I0224 02:03:56.024390 7864 server.go:79] "Starting device plugin registration server"
Feb 24 02:03:56.024915 master-0 kubenswrapper[7864]: I0224 02:03:56.024876 7864 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 24 02:03:56.024968 master-0 kubenswrapper[7864]: I0224 02:03:56.024896 7864 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 24 02:03:56.025166 master-0 kubenswrapper[7864]: I0224 02:03:56.025138 7864 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 24 02:03:56.025270 master-0 kubenswrapper[7864]: I0224 02:03:56.025244 7864 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 24 02:03:56.025270 master-0 kubenswrapper[7864]: I0224 02:03:56.025258 7864 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 24 02:03:56.125989 master-0 kubenswrapper[7864]: I0224 02:03:56.125877 7864 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 02:03:56.128093 master-0 kubenswrapper[7864]: I0224 02:03:56.128058 7864 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 02:03:56.128160 master-0 kubenswrapper[7864]: I0224 02:03:56.128114 7864 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 02:03:56.128160 master-0 kubenswrapper[7864]: I0224 02:03:56.128129 7864 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 02:03:56.128245 master-0 kubenswrapper[7864]: I0224 02:03:56.128204 7864 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 02:03:56.140210 master-0 kubenswrapper[7864]: I0224 02:03:56.140136 7864 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Feb 24 02:03:56.140402 master-0 kubenswrapper[7864]: I0224 02:03:56.140373 7864 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 24 02:03:56.179408 master-0 kubenswrapper[7864]: I0224 02:03:56.179283 7864 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Feb 24 02:03:56.180350 master-0 kubenswrapper[7864]: I0224 02:03:56.180310 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f77a5110cbd6ffb82334a858bc74333d14fee5bea94889fdaf87723880303b"
Feb 24 02:03:56.180504 master-0 kubenswrapper[7864]: I0224 02:03:56.180380 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce"}
Feb 24 02:03:56.180504 master-0 kubenswrapper[7864]: I0224 02:03:56.180465 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.180485 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.180712 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"97c357985567dbd0d0a6d267a8cc4448d666a74cc353c0643a50d8ab3f6c2302"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181017 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181031 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181043 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"8ce6ca3d3b13c63a9ea107eaabf4c609711cee1bd75660ff7fc88de79d18620c"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181058 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181073 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181086 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"25efea610d9bc2514ea6f62c6d5763641769d0262eae03f839a9b98d1e3382eb"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181099 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"d130af32cfc701e2db477383fd28dc66d411455c0f39e12c729c963e3f569427"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181111 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"0be92c811440a62e120f914b735c96a60861f339574fbe9008068727fac04419"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181122 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181140 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf2d712c422a3cffe6f1ecfbb7b0b5262ef2f0636bb7727cd7fd72186b9cf69"
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181155 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"73250fbf83eb734a494f12593474f38faaba12f425754ad28c833c6cc94b24a7"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181167 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"4ea164dd4d44c905424ce0b0b3ea58702494938b88cbbbe52d4ce16914c7762b"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181182 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"17bbd7fbbeaf0dec034d902b8e7575b6559c59f312f1a534ffc4119208ba1272"}
Feb 24 02:03:56.181157 master-0 kubenswrapper[7864]: I0224 02:03:56.181196 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793"}
Feb 24 02:03:56.182994 master-0 kubenswrapper[7864]: I0224 02:03:56.181209 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e4dd152418288c83854aae3e150fd3be1fd966f2ead04dae32ed3cad75dace"
Feb 24 02:03:56.195715 master-0 kubenswrapper[7864]: E0224 02:03:56.195653 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:03:56.197430 master-0 kubenswrapper[7864]: E0224 02:03:56.197389 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:03:56.197649 master-0 kubenswrapper[7864]: W0224 02:03:56.197614 7864 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 24 02:03:56.197649 master-0 kubenswrapper[7864]: E0224 02:03:56.197647 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:03:56.197909 master-0 kubenswrapper[7864]: E0224 02:03:56.197874 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:03:56.256764 master-0 kubenswrapper[7864]: I0224 02:03:56.256726 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:03:56.256887 master-0 kubenswrapper[7864]: I0224 02:03:56.256771 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:03:56.256887 master-0 kubenswrapper[7864]: I0224 02:03:56.256802 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:03:56.256887 master-0 kubenswrapper[7864]: I0224 02:03:56.256822 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:03:56.256887 master-0 kubenswrapper[7864]: I0224 02:03:56.256841 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:03:56.256887 master-0 kubenswrapper[7864]: I0224 02:03:56.256861 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:03:56.256887 master-0 kubenswrapper[7864]: I0224 02:03:56.256878 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:03:56.257217 master-0 kubenswrapper[7864]: I0224 02:03:56.256898 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:03:56.257217 master-0 kubenswrapper[7864]: I0224 02:03:56.256919 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName:
\"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.257217 master-0 kubenswrapper[7864]: I0224 02:03:56.256937 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.257217 master-0 kubenswrapper[7864]: I0224 02:03:56.256956 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.257217 master-0 kubenswrapper[7864]: I0224 02:03:56.256998 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:03:56.257217 master-0 kubenswrapper[7864]: I0224 02:03:56.257016 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.257217 master-0 
kubenswrapper[7864]: I0224 02:03:56.257032 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.257217 master-0 kubenswrapper[7864]: I0224 02:03:56.257051 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.257217 master-0 kubenswrapper[7864]: I0224 02:03:56.257068 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:03:56.257217 master-0 kubenswrapper[7864]: I0224 02:03:56.257085 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.358259 master-0 kubenswrapper[7864]: I0224 02:03:56.358170 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" 
Feb 24 02:03:56.358432 master-0 kubenswrapper[7864]: I0224 02:03:56.358263 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.358432 master-0 kubenswrapper[7864]: I0224 02:03:56.358303 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.358432 master-0 kubenswrapper[7864]: I0224 02:03:56.358329 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:03:56.358432 master-0 kubenswrapper[7864]: I0224 02:03:56.358338 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.358432 master-0 kubenswrapper[7864]: I0224 02:03:56.358400 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:03:56.358432 master-0 
kubenswrapper[7864]: I0224 02:03:56.358433 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.358451 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.358511 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.358514 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.358552 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:03:56.359151 
master-0 kubenswrapper[7864]: I0224 02:03:56.358760 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.358855 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.358888 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.358918 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.358963 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: 
I0224 02:03:56.358980 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.359000 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.359039 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.359039 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.359091 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.359151 master-0 
kubenswrapper[7864]: I0224 02:03:56.359124 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.359151 master-0 kubenswrapper[7864]: I0224 02:03:56.359154 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359208 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359219 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359259 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359297 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359489 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359462 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359537 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359526 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359615 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359654 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:03:56.360497 master-0 kubenswrapper[7864]: I0224 02:03:56.359644 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.578777 master-0 kubenswrapper[7864]: I0224 02:03:56.578740 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:56.817300 master-0 kubenswrapper[7864]: I0224 02:03:56.817184 7864 apiserver.go:52] "Watching apiserver" Feb 24 02:03:56.833481 master-0 kubenswrapper[7864]: I0224 02:03:56.833370 7864 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.836713 7864 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg","openshift-network-diagnostics/network-check-target-54b95","openshift-network-node-identity/network-node-identity-p5b6q","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn","openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq","openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc","openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg","openshift-ingress-operator/ingress-operator-6569778c84-6dlqb","openshift-multus/multus-7fbjw","openshift-multus/multus-additional-cni-plugins-jtdht","openshift-ovn-kubernetes/ovnkube-node-rg9r6","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr","kube-system/bootstrap-kube-scheduler-master-0","openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt","openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4","openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f","openshift-multus/network-metrics-daemon-tntcf","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n","openshift-network-operator/iptables-alerter-rjbl5","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c","assisted-installer/assisted-installer-controller-f2lj9","kube-system/bootstrap-kube-controller-manager-master-0","openshift-dns-operator/dns-operator-8c7d49845-hxcn2","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7","openshift-marketplace/market
place-operator-6f5488b997-4qf9p","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-operator/network-operator-7d7db75979-drrqm","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k","openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"] Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.837113 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-f2lj9" Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.837221 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.837343 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.837414 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.837592 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.838188 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.838388 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.838611 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.838801 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:56.840528 master-0 kubenswrapper[7864]: I0224 02:03:56.839831 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:56.841227 master-0 kubenswrapper[7864]: I0224 02:03:56.840882 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:03:56.841289 master-0 kubenswrapper[7864]: I0224 02:03:56.841261 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:56.841458 master-0 kubenswrapper[7864]: I0224 02:03:56.841400 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:56.841562 master-0 kubenswrapper[7864]: I0224 02:03:56.841527 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:56.842555 master-0 kubenswrapper[7864]: I0224 02:03:56.842312 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:56.843242 master-0 kubenswrapper[7864]: I0224 02:03:56.843201 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:56.844039 master-0 kubenswrapper[7864]: I0224 02:03:56.843975 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 02:03:56.844669 master-0 kubenswrapper[7864]: I0224 02:03:56.844619 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.844052 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.844820 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.844863 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.844994 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.845223 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 
02:03:56.846214 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.846346 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.846525 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.848712 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.848853 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.848982 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.854846 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 24 02:03:56.859798 master-0 kubenswrapper[7864]: I0224 02:03:56.854982 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.868089 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.868226 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.868229 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.868853 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.869048 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.869398 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.869570 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.870158 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.870186 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.870696 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 02:03:56.871157 master-0 kubenswrapper[7864]: I0224 02:03:56.870815 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.871686 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.871911 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.872106 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.872617 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.872763 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.873006 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.873236 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.873313 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.873346 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.873500 master-0 kubenswrapper[7864]: I0224 02:03:56.873477 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.873539 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.873658 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.873725 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.873665 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.873249 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.873969 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.874127 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.874190 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.874205 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.874277 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 24 02:03:56.874493 master-0 kubenswrapper[7864]: I0224 02:03:56.874511 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.879257 master-0 kubenswrapper[7864]: I0224 02:03:56.874551 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 24 02:03:56.879257 master-0 kubenswrapper[7864]: I0224 02:03:56.874871 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.879257 master-0 kubenswrapper[7864]: I0224 02:03:56.874997 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 24 02:03:56.879257 master-0 kubenswrapper[7864]: I0224 02:03:56.875275 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 24 02:03:56.879257 master-0 kubenswrapper[7864]: I0224 02:03:56.875333 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.879257 master-0 kubenswrapper[7864]: I0224 02:03:56.875474 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 24 02:03:56.879257 master-0 kubenswrapper[7864]: I0224 02:03:56.876115 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 24 02:03:56.879257 master-0 kubenswrapper[7864]: I0224 02:03:56.877011 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 24 02:03:56.880667 master-0 kubenswrapper[7864]: I0224 02:03:56.880297 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 24 02:03:56.880667 master-0 kubenswrapper[7864]: I0224 02:03:56.880505 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.881134 master-0 kubenswrapper[7864]: I0224 02:03:56.881084 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.881226 master-0 kubenswrapper[7864]: I0224 02:03:56.881191 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.881257 master-0 kubenswrapper[7864]: I0224 02:03:56.881246 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 24 02:03:56.881389 master-0 kubenswrapper[7864]: I0224 02:03:56.881115 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 24 02:03:56.881645 master-0 kubenswrapper[7864]: I0224 02:03:56.881622 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 24 02:03:56.882103 master-0 kubenswrapper[7864]: I0224 02:03:56.882083 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 02:03:56.882158 master-0 kubenswrapper[7864]: I0224 02:03:56.882133 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 24 02:03:56.882241 master-0 kubenswrapper[7864]: I0224 02:03:56.882217 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.882437 master-0 kubenswrapper[7864]: I0224 02:03:56.882416 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.882504 master-0 kubenswrapper[7864]: I0224 02:03:56.882468 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.882589 master-0 kubenswrapper[7864]: I0224 02:03:56.882559 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.882698 master-0 kubenswrapper[7864]: I0224 02:03:56.882681 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.882768 master-0 kubenswrapper[7864]: I0224 02:03:56.882745 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.882917 master-0 kubenswrapper[7864]: I0224 02:03:56.882898 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 24 02:03:56.882979 master-0 kubenswrapper[7864]: I0224 02:03:56.882960 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 24 02:03:56.883038 master-0 kubenswrapper[7864]: I0224 02:03:56.883017 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.883083 master-0 kubenswrapper[7864]: I0224 02:03:56.883027 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 24 02:03:56.883153 master-0 kubenswrapper[7864]: I0224 02:03:56.883136 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 24 02:03:56.883225 master-0 kubenswrapper[7864]: I0224 02:03:56.883199 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.883314 master-0 kubenswrapper[7864]: I0224 02:03:56.883292 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 24 02:03:56.883438 master-0 kubenswrapper[7864]: I0224 02:03:56.883419 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 24 02:03:56.883658 master-0 kubenswrapper[7864]: I0224 02:03:56.883638 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 24 02:03:56.883705 master-0 kubenswrapper[7864]: I0224 02:03:56.883656 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.883886 master-0 kubenswrapper[7864]: I0224 02:03:56.883862 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 24 02:03:56.884025 master-0 kubenswrapper[7864]: I0224 02:03:56.883931 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 24 02:03:56.884025 master-0 kubenswrapper[7864]: I0224 02:03:56.883981 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.884189 master-0 kubenswrapper[7864]: I0224 02:03:56.884077 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 24 02:03:56.884189 master-0 kubenswrapper[7864]: I0224 02:03:56.884084 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.884388 master-0 kubenswrapper[7864]: I0224 02:03:56.884369 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.884425 master-0 kubenswrapper[7864]: I0224 02:03:56.884399 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 02:03:56.884778 master-0 kubenswrapper[7864]: I0224 02:03:56.884754 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 02:03:56.884928 master-0 kubenswrapper[7864]: I0224 02:03:56.884901 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 24 02:03:56.885311 master-0 kubenswrapper[7864]: I0224 02:03:56.885239 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 24 02:03:56.885500 master-0 kubenswrapper[7864]: I0224 02:03:56.885452 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 02:03:56.886126 master-0 kubenswrapper[7864]: I0224 02:03:56.886088 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 24 02:03:56.886177 master-0 kubenswrapper[7864]: I0224 02:03:56.886157 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 24 02:03:56.886758 master-0 kubenswrapper[7864]: I0224 02:03:56.886720 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 24 02:03:56.888560 master-0 kubenswrapper[7864]: I0224 02:03:56.888537 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 24 02:03:56.891543 master-0 kubenswrapper[7864]: I0224 02:03:56.891513 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 24 02:03:56.900110 master-0 kubenswrapper[7864]: I0224 02:03:56.900080 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 24 02:03:56.900861 master-0 kubenswrapper[7864]: I0224 02:03:56.900830 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 02:03:56.901931 master-0 kubenswrapper[7864]: I0224 02:03:56.901896 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 24 02:03:56.902823 master-0 kubenswrapper[7864]: I0224 02:03:56.902776 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 24 02:03:56.903404 master-0 kubenswrapper[7864]: I0224 02:03:56.903382 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 24 02:03:56.915779 master-0 kubenswrapper[7864]: I0224 02:03:56.915736 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 24 02:03:56.919903 master-0 kubenswrapper[7864]: I0224 02:03:56.919875 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 24 02:03:56.933881 master-0 kubenswrapper[7864]: I0224 02:03:56.933854 7864 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 24 02:03:56.940374 master-0 kubenswrapper[7864]: I0224 02:03:56.940337 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 24 02:03:56.959885 master-0 kubenswrapper[7864]: I0224 02:03:56.959864 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 24 02:03:56.968880 master-0 kubenswrapper[7864]: I0224 02:03:56.968824 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-ovn\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.969024 master-0 kubenswrapper[7864]: I0224 02:03:56.968956 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:56.969112 master-0 kubenswrapper[7864]: I0224 02:03:56.969050 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02536a3-7d3e-4e74-9625-aefed518ec35-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:56.969161 master-0 kubenswrapper[7864]: I0224 02:03:56.969132 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cni-binary-copy\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:56.969246 master-0 kubenswrapper[7864]: I0224 02:03:56.969219 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:03:56.969342 master-0 kubenswrapper[7864]: I0224 02:03:56.969307 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:03:56.969419 master-0 kubenswrapper[7864]: I0224 02:03:56.969393 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:56.969468 master-0 kubenswrapper[7864]: I0224 02:03:56.969451 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02536a3-7d3e-4e74-9625-aefed518ec35-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:03:56.969504 master-0 kubenswrapper[7864]: I0224 02:03:56.969476 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwjpw\" (UniqueName: \"kubernetes.io/projected/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-kube-api-access-pwjpw\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:56.969533 master-0 kubenswrapper[7864]: I0224 02:03:56.969507 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:56.969912 master-0 kubenswrapper[7864]: I0224 02:03:56.969551 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hfh\" (UniqueName: \"kubernetes.io/projected/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-kube-api-access-82hfh\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:56.969998 master-0 kubenswrapper[7864]: I0224 02:03:56.969956 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6lsp\" (UniqueName: \"kubernetes.io/projected/db8d6627-394c-4087-bfa4-bf7580f6bb4b-kube-api-access-x6lsp\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:56.970064 master-0 kubenswrapper[7864]: I0224 02:03:56.970028 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-env-overrides\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.970104 master-0 kubenswrapper[7864]: I0224 02:03:56.970053 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:03:56.970104 master-0 kubenswrapper[7864]: I0224 02:03:56.970083 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twgrj\" (UniqueName: \"kubernetes.io/projected/12b89e05-a503-47aa-90b2-4d741e015b19-kube-api-access-twgrj\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:03:56.970366 master-0 kubenswrapper[7864]: I0224 02:03:56.970323 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:56.970398 master-0 kubenswrapper[7864]: I0224 02:03:56.970365 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-env-overrides\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.970434 master-0 kubenswrapper[7864]: I0224 02:03:56.970392 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sp95\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-kube-api-access-9sp95\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:56.970507 master-0 kubenswrapper[7864]: I0224 02:03:56.969651 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cni-binary-copy\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:56.970700 master-0 kubenswrapper[7864]: I0224 02:03:56.970605 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-systemd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.970700 master-0 kubenswrapper[7864]: I0224 02:03:56.970672 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-env-overrides\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:03:56.970766 master-0 kubenswrapper[7864]: I0224 02:03:56.970712 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-etc-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.970766 master-0 kubenswrapper[7864]: I0224 02:03:56.970751 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.971132 master-0 kubenswrapper[7864]: I0224 02:03:56.971062 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-config\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.971477 master-0 kubenswrapper[7864]: I0224 02:03:56.971441 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-config\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.971538 master-0 kubenswrapper[7864]: I0224 02:03:56.971506 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-config\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:56.971732 master-0 kubenswrapper[7864]: I0224 02:03:56.971681 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-config\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:56.972161 master-0 kubenswrapper[7864]: I0224 02:03:56.972120 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:03:56.972234 master-0 kubenswrapper[7864]: I0224 02:03:56.972200 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:03:56.972319 master-0 kubenswrapper[7864]: I0224 02:03:56.972284 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:56.972437 master-0 kubenswrapper[7864]: I0224 02:03:56.972404 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:56.972469 master-0 kubenswrapper[7864]: I0224 02:03:56.972454 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:56.972545 master-0 kubenswrapper[7864]: I0224 02:03:56.972522 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:56.973097 master-0 kubenswrapper[7864]: I0224 02:03:56.973052 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:56.973180 master-0 kubenswrapper[7864]: I0224 02:03:56.973134 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:56.973180 master-0 kubenswrapper[7864]: I0224 02:03:56.973166 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nwzm\" (UniqueName: \"kubernetes.io/projected/c92835f0-7f32-4584-8304-843d7979392a-kube-api-access-6nwzm\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:03:56.973391 master-0 kubenswrapper[7864]: I0224 02:03:56.973359 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-hostroot\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:56.973478 master-0 kubenswrapper[7864]: I0224 02:03:56.973412 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-images\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:56.973478 master-0 kubenswrapper[7864]: I0224 02:03:56.973455 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"
Feb 24 02:03:56.973530 master-0 kubenswrapper[7864]: I0224 02:03:56.973498 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6f7j\" (UniqueName: \"kubernetes.io/projected/cabdddba-5507-4e47-98ef-a00c6d0f305d-kube-api-access-h6f7j\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:56.973587 master-0 kubenswrapper[7864]: I0224 02:03:56.973555 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-client\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:56.973696 master-0 kubenswrapper[7864]: I0224 02:03:56.973652 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2"
Feb 24 02:03:56.973768 master-0 kubenswrapper[7864]: I0224 02:03:56.973717 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:03:56.973831 master-0 kubenswrapper[7864]: I0224 02:03:56.973800 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgjlz\" (UniqueName: \"kubernetes.io/projected/6320dbb5-b84d-4a57-8c65-fbed8421f84a-kube-api-access-pgjlz\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:03:56.973880 master-0 kubenswrapper[7864]: I0224 02:03:56.973853 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-netd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.973931 master-0 kubenswrapper[7864]: I0224 02:03:56.973904 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ssxg\" (UniqueName: \"kubernetes.io/projected/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-kube-api-access-2ssxg\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"
Feb 24 02:03:56.974002 master-0 kubenswrapper[7864]: I0224 02:03:56.973981 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-client\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:56.974034 master-0 kubenswrapper[7864]: I0224 02:03:56.973979 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nmd6\" (UniqueName: \"kubernetes.io/projected/70e2ba24-4871-4d1d-9935-156fdbeb2810-kube-api-access-4nmd6\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:03:56.974074 master-0 kubenswrapper[7864]: I0224 02:03:56.974054 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:56.974105 master-0 kubenswrapper[7864]: I0224 02:03:56.974087 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-log-socket\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:56.974134 master-0
kubenswrapper[7864]: I0224 02:03:56.974112 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:56.974212 master-0 kubenswrapper[7864]: I0224 02:03:56.974182 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:56.974263 master-0 kubenswrapper[7864]: I0224 02:03:56.974238 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.974319 master-0 kubenswrapper[7864]: I0224 02:03:56.974293 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtf52\" (UniqueName: \"kubernetes.io/projected/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-kube-api-access-jtf52\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:56.974362 master-0 kubenswrapper[7864]: I0224 02:03:56.974340 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92835f0-7f32-4584-8304-843d7979392a-serving-cert\") pod 
\"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:03:56.974399 master-0 kubenswrapper[7864]: I0224 02:03:56.974386 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-config\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:56.974447 master-0 kubenswrapper[7864]: I0224 02:03:56.974401 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-images\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:56.974499 master-0 kubenswrapper[7864]: I0224 02:03:56.974423 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-k8s-cni-cncf-io\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.974567 master-0 kubenswrapper[7864]: I0224 02:03:56.974534 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpg26\" (UniqueName: \"kubernetes.io/projected/fcbda577-b943-4b5c-b041-948aece8e40f-kube-api-access-vpg26\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:03:56.974657 master-0 
kubenswrapper[7864]: I0224 02:03:56.974634 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-config\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:56.974695 master-0 kubenswrapper[7864]: I0224 02:03:56.974664 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-cnibin\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:56.974738 master-0 kubenswrapper[7864]: I0224 02:03:56.974702 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92835f0-7f32-4584-8304-843d7979392a-serving-cert\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:03:56.974782 master-0 kubenswrapper[7864]: I0224 02:03:56.974752 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-os-release\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:56.974841 master-0 kubenswrapper[7864]: I0224 02:03:56.974817 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: 
\"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:56.974891 master-0 kubenswrapper[7864]: I0224 02:03:56.974866 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-multus-certs\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.974938 master-0 kubenswrapper[7864]: I0224 02:03:56.974914 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:56.974989 master-0 kubenswrapper[7864]: I0224 02:03:56.974963 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:56.975070 master-0 kubenswrapper[7864]: I0224 02:03:56.975020 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlg2j\" (UniqueName: \"kubernetes.io/projected/523033b8-4101-4a55-8320-55bef04ddaaf-kube-api-access-dlg2j\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:56.975143 master-0 kubenswrapper[7864]: I0224 02:03:56.975119 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-serving-cert\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:56.975247 master-0 kubenswrapper[7864]: I0224 02:03:56.975211 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8jx\" (UniqueName: \"kubernetes.io/projected/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-kube-api-access-rc8jx\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.975352 master-0 kubenswrapper[7864]: I0224 02:03:56.975325 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:56.975383 master-0 kubenswrapper[7864]: I0224 02:03:56.975219 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:56.975460 master-0 kubenswrapper[7864]: I0224 02:03:56.975433 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-serving-cert\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:56.975494 master-0 kubenswrapper[7864]: I0224 02:03:56.975436 7864 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:56.975551 master-0 kubenswrapper[7864]: I0224 02:03:56.975525 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85222bf-f51a-4232-8db1-1e6ee593617b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:03:56.975626 master-0 kubenswrapper[7864]: I0224 02:03:56.975601 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:56.975672 master-0 kubenswrapper[7864]: I0224 02:03:56.975649 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f72a322-2142-482a-9b0b-2ad890181d7a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:56.975714 master-0 kubenswrapper[7864]: I0224 02:03:56.975692 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-script-lib\") pod 
\"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.975789 master-0 kubenswrapper[7864]: I0224 02:03:56.975737 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f72a322-2142-482a-9b0b-2ad890181d7a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:56.975877 master-0 kubenswrapper[7864]: I0224 02:03:56.975853 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:56.975919 master-0 kubenswrapper[7864]: I0224 02:03:56.975896 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:56.975999 master-0 kubenswrapper[7864]: I0224 02:03:56.975967 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-os-release\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.976454 master-0 kubenswrapper[7864]: I0224 02:03:56.976413 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.976562 master-0 kubenswrapper[7864]: I0224 02:03:56.976523 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85222bf-f51a-4232-8db1-1e6ee593617b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:03:56.976614 master-0 kubenswrapper[7864]: I0224 02:03:56.976142 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:56.976655 master-0 kubenswrapper[7864]: I0224 02:03:56.976103 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f72a322-2142-482a-9b0b-2ad890181d7a-service-ca\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:56.976684 master-0 kubenswrapper[7864]: I0224 02:03:56.976339 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" 
Feb 24 02:03:56.976713 master-0 kubenswrapper[7864]: I0224 02:03:56.976557 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssz8p\" (UniqueName: \"kubernetes.io/projected/6a9ccd8e-d964-4c03-8ffc-51b464030c25-kube-api-access-ssz8p\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:56.976763 master-0 kubenswrapper[7864]: I0224 02:03:56.976739 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-bin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.976808 master-0 kubenswrapper[7864]: I0224 02:03:56.976786 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-trusted-ca\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:56.977400 master-0 kubenswrapper[7864]: I0224 02:03:56.976864 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/523033b8-4101-4a55-8320-55bef04ddaaf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:56.977524 master-0 kubenswrapper[7864]: I0224 02:03:56.977149 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/523033b8-4101-4a55-8320-55bef04ddaaf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:56.977624 master-0 kubenswrapper[7864]: I0224 02:03:56.977352 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-trusted-ca\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:56.977667 master-0 kubenswrapper[7864]: I0224 02:03:56.977441 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovn-node-metrics-cert\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.977701 master-0 kubenswrapper[7864]: I0224 02:03:56.977672 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3332acec-1553-4594-a903-a322399f6d9d-host-etc-kube\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:03:56.977762 master-0 kubenswrapper[7864]: I0224 02:03:56.977720 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8rjx\" (UniqueName: \"kubernetes.io/projected/91d16f7b-390a-4d9d-99d6-cc8e210801d1-kube-api-access-b8rjx\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:56.977798 
master-0 kubenswrapper[7864]: I0224 02:03:56.977783 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85222bf-f51a-4232-8db1-1e6ee593617b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:03:56.977855 master-0 kubenswrapper[7864]: I0224 02:03:56.977826 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:56.977980 master-0 kubenswrapper[7864]: I0224 02:03:56.977960 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:56.978198 master-0 kubenswrapper[7864]: I0224 02:03:56.978127 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:56.978242 master-0 kubenswrapper[7864]: I0224 02:03:56.978216 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-bound-sa-token\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:56.978334 master-0 kubenswrapper[7864]: I0224 02:03:56.978317 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:03:56.978428 master-0 kubenswrapper[7864]: I0224 02:03:56.978411 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-var-lib-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.978502 master-0 kubenswrapper[7864]: I0224 02:03:56.978489 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a9ccd8e-d964-4c03-8ffc-51b464030c25-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:56.978639 master-0 kubenswrapper[7864]: I0224 02:03:56.978498 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " 
pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:56.978680 master-0 kubenswrapper[7864]: I0224 02:03:56.978617 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-conf-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.978743 master-0 kubenswrapper[7864]: I0224 02:03:56.978706 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:56.978786 master-0 kubenswrapper[7864]: I0224 02:03:56.978748 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:03:56.978835 master-0 kubenswrapper[7864]: I0224 02:03:56.978804 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36d8451-0fda-4d9d-a850-d05c8f847016-config\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:03:56.978934 master-0 kubenswrapper[7864]: I0224 02:03:56.978911 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6a9ccd8e-d964-4c03-8ffc-51b464030c25-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:56.979168 master-0 kubenswrapper[7864]: I0224 02:03:56.979004 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84dc269-43ae-4083-9998-a0b3c90bb681-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:56.979168 master-0 kubenswrapper[7864]: I0224 02:03:56.979067 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-config\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:56.979168 master-0 kubenswrapper[7864]: I0224 02:03:56.979103 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02536a3-7d3e-4e74-9625-aefed518ec35-config\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:03:56.979267 master-0 kubenswrapper[7864]: I0224 02:03:56.979160 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-node-log\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" 
Feb 24 02:03:56.979267 master-0 kubenswrapper[7864]: I0224 02:03:56.979230 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:03:56.979322 master-0 kubenswrapper[7864]: I0224 02:03:56.979268 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-system-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.979322 master-0 kubenswrapper[7864]: I0224 02:03:56.979304 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cnibin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.979376 master-0 kubenswrapper[7864]: I0224 02:03:56.979329 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84dc269-43ae-4083-9998-a0b3c90bb681-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:56.979376 master-0 kubenswrapper[7864]: I0224 02:03:56.979342 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnd2\" (UniqueName: \"kubernetes.io/projected/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-kube-api-access-bbnd2\") pod \"multus-7fbjw\" (UID: 
\"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.979426 master-0 kubenswrapper[7864]: I0224 02:03:56.979395 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-serving-cert\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:56.979482 master-0 kubenswrapper[7864]: I0224 02:03:56.979432 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.979520 master-0 kubenswrapper[7864]: I0224 02:03:56.979454 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02536a3-7d3e-4e74-9625-aefed518ec35-config\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:03:56.979520 master-0 kubenswrapper[7864]: I0224 02:03:56.979504 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bl6\" (UniqueName: \"kubernetes.io/projected/303d5058-84df-40d1-a941-896b093ae470-kube-api-access-79bl6\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:56.979585 master-0 kubenswrapper[7864]: I0224 02:03:56.979509 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:03:56.979585 master-0 kubenswrapper[7864]: I0224 02:03:56.979547 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-systemd-units\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.979636 master-0 kubenswrapper[7864]: I0224 02:03:56.979568 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-slash\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.979669 master-0 kubenswrapper[7864]: I0224 02:03:56.979644 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-netns\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.979695 master-0 kubenswrapper[7864]: I0224 02:03:56.979666 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/303d5058-84df-40d1-a941-896b093ae470-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:56.979724 master-0 
kubenswrapper[7864]: I0224 02:03:56.979693 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-config\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:56.979753 master-0 kubenswrapper[7864]: I0224 02:03:56.979714 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njjq8\" (UniqueName: \"kubernetes.io/projected/b36d8451-0fda-4d9d-a850-d05c8f847016-kube-api-access-njjq8\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:03:56.979784 master-0 kubenswrapper[7864]: I0224 02:03:56.979733 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-serving-cert\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:03:56.979784 master-0 kubenswrapper[7864]: I0224 02:03:56.979763 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:56.979838 master-0 kubenswrapper[7864]: I0224 02:03:56.979797 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 02:03:56.979864 master-0 kubenswrapper[7864]: I0224 02:03:56.979815 7864 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36d8451-0fda-4d9d-a850-d05c8f847016-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:03:56.979864 master-0 kubenswrapper[7864]: I0224 02:03:56.979838 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/303d5058-84df-40d1-a941-896b093ae470-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:56.979917 master-0 kubenswrapper[7864]: I0224 02:03:56.979864 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8hv\" (UniqueName: \"kubernetes.io/projected/2cb764f6-40f8-4e87-8be0-b9d7b0364201-kube-api-access-sp8hv\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:56.979917 master-0 kubenswrapper[7864]: I0224 02:03:56.979897 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:56.979970 master-0 kubenswrapper[7864]: I0224 02:03:56.979924 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36d8451-0fda-4d9d-a850-d05c8f847016-serving-cert\") pod 
\"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:03:56.979970 master-0 kubenswrapper[7864]: I0224 02:03:56.979925 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqkv\" (UniqueName: \"kubernetes.io/projected/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-kube-api-access-dqqkv\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:56.980024 master-0 kubenswrapper[7864]: I0224 02:03:56.979969 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3332acec-1553-4594-a903-a322399f6d9d-metrics-tls\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:03:56.980024 master-0 kubenswrapper[7864]: I0224 02:03:56.979982 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:56.980024 master-0 kubenswrapper[7864]: I0224 02:03:56.979998 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbda577-b943-4b5c-b041-948aece8e40f-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 
24 02:03:56.980101 master-0 kubenswrapper[7864]: I0224 02:03:56.980026 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbda577-b943-4b5c-b041-948aece8e40f-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:03:56.980146 master-0 kubenswrapper[7864]: I0224 02:03:56.980126 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/303d5058-84df-40d1-a941-896b093ae470-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:56.980187 master-0 kubenswrapper[7864]: I0224 02:03:56.980158 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c92835f0-7f32-4584-8304-843d7979392a-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:03:56.980214 master-0 kubenswrapper[7864]: I0224 02:03:56.980184 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbda577-b943-4b5c-b041-948aece8e40f-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:03:56.980214 master-0 kubenswrapper[7864]: I0224 02:03:56.980184 7864 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x6qs2\" (UniqueName: \"kubernetes.io/projected/3332acec-1553-4594-a903-a322399f6d9d-kube-api-access-x6qs2\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:03:56.980265 master-0 kubenswrapper[7864]: I0224 02:03:56.980207 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbda577-b943-4b5c-b041-948aece8e40f-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:03:56.980330 master-0 kubenswrapper[7864]: I0224 02:03:56.980301 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c92835f0-7f32-4584-8304-843d7979392a-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:03:56.980394 master-0 kubenswrapper[7864]: I0224 02:03:56.980363 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-multus\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.980430 master-0 kubenswrapper[7864]: I0224 02:03:56.980412 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod 
\"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:56.980485 master-0 kubenswrapper[7864]: I0224 02:03:56.980459 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qph4g\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-kube-api-access-qph4g\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:56.980521 master-0 kubenswrapper[7864]: I0224 02:03:56.980464 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3332acec-1553-4594-a903-a322399f6d9d-metrics-tls\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:03:56.980552 master-0 kubenswrapper[7864]: I0224 02:03:56.980508 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/303d5058-84df-40d1-a941-896b093ae470-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:56.980552 master-0 kubenswrapper[7864]: I0224 02:03:56.980520 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:56.980628 master-0 kubenswrapper[7864]: I0224 02:03:56.980599 7864 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:56.980657 master-0 kubenswrapper[7864]: I0224 02:03:56.980642 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86pcb\" (UniqueName: \"kubernetes.io/projected/7b098bd4-5751-4b01-8409-0688fd29233e-kube-api-access-86pcb\") pod \"csi-snapshot-controller-operator-6fb4df594f-c95qc\" (UID: \"7b098bd4-5751-4b01-8409-0688fd29233e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" Feb 24 02:03:56.980709 master-0 kubenswrapper[7864]: I0224 02:03:56.980681 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-kubelet\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.980748 master-0 kubenswrapper[7864]: I0224 02:03:56.980714 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:56.980748 master-0 kubenswrapper[7864]: I0224 02:03:56.980730 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: 
\"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:56.980812 master-0 kubenswrapper[7864]: I0224 02:03:56.980787 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-host-slash\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:56.980861 master-0 kubenswrapper[7864]: I0224 02:03:56.980836 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:56.980861 master-0 kubenswrapper[7864]: I0224 02:03:56.980852 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:03:56.980921 master-0 kubenswrapper[7864]: I0224 02:03:56.980887 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-etc-kubernetes\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.980949 master-0 kubenswrapper[7864]: I0224 02:03:56.980927 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-ovnkube-identity-cm\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:03:56.980982 master-0 kubenswrapper[7864]: I0224 02:03:56.980961 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:56.981033 master-0 kubenswrapper[7864]: I0224 02:03:56.980970 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-daemon-config\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.981108 master-0 kubenswrapper[7864]: I0224 02:03:56.981074 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:56.981177 master-0 kubenswrapper[7864]: I0224 02:03:56.981146 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:56.981244 master-0 
kubenswrapper[7864]: I0224 02:03:56.981162 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36d8451-0fda-4d9d-a850-d05c8f847016-config\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:03:56.981278 master-0 kubenswrapper[7864]: I0224 02:03:56.981255 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-daemon-config\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.981312 master-0 kubenswrapper[7864]: I0224 02:03:56.981217 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-iptables-alerter-script\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:56.981362 master-0 kubenswrapper[7864]: I0224 02:03:56.981335 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv6t5\" (UniqueName: \"kubernetes.io/projected/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-kube-api-access-qv6t5\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:56.981414 master-0 kubenswrapper[7864]: I0224 02:03:56.981390 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: 
\"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:56.981464 master-0 kubenswrapper[7864]: I0224 02:03:56.981439 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:56.981511 master-0 kubenswrapper[7864]: I0224 02:03:56.981488 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-netns\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.981563 master-0 kubenswrapper[7864]: I0224 02:03:56.981541 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-kubelet\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.981646 master-0 kubenswrapper[7864]: I0224 02:03:56.981623 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-system-cni-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:56.981703 master-0 kubenswrapper[7864]: I0224 02:03:56.981673 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vrmsh\" (UniqueName: \"kubernetes.io/projected/57811d07-ae8a-44b7-8efb-dafc5afad31e-kube-api-access-vrmsh\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:56.981771 master-0 kubenswrapper[7864]: I0224 02:03:56.981728 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:56.981822 master-0 kubenswrapper[7864]: I0224 02:03:56.981797 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:56.981865 master-0 kubenswrapper[7864]: I0224 02:03:56.981843 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabdddba-5507-4e47-98ef-a00c6d0f305d-serving-cert\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:56.981927 master-0 kubenswrapper[7864]: I0224 02:03:56.981903 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtld\" (UniqueName: \"kubernetes.io/projected/adc1097b-c1ab-4f09-965d-1c819671475b-kube-api-access-nqtld\") pod \"network-node-identity-p5b6q\" (UID: 
\"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:03:56.981987 master-0 kubenswrapper[7864]: I0224 02:03:56.981961 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:56.982213 master-0 kubenswrapper[7864]: I0224 02:03:56.981962 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85222bf-f51a-4232-8db1-1e6ee593617b-config\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:03:56.982213 master-0 kubenswrapper[7864]: I0224 02:03:56.981969 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:56.982213 master-0 kubenswrapper[7864]: I0224 02:03:56.982018 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02536a3-7d3e-4e74-9625-aefed518ec35-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:03:56.982213 master-0 kubenswrapper[7864]: I0224 02:03:56.982011 7864 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabdddba-5507-4e47-98ef-a00c6d0f305d-serving-cert\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:03:56.982213 master-0 kubenswrapper[7864]: I0224 02:03:56.982046 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-socket-dir-parent\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:56.982213 master-0 kubenswrapper[7864]: I0224 02:03:56.982120 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:03:56.982213 master-0 kubenswrapper[7864]: I0224 02:03:56.982163 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:56.982538 master-0 kubenswrapper[7864]: I0224 02:03:56.982248 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85222bf-f51a-4232-8db1-1e6ee593617b-config\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:03:56.982538 master-0 kubenswrapper[7864]: I0224 02:03:56.982377 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2b65\" (UniqueName: \"kubernetes.io/projected/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-kube-api-access-n2b65\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:03:56.982538 master-0 kubenswrapper[7864]: I0224 02:03:56.982420 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-bin\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:56.982538 master-0 kubenswrapper[7864]: I0224 02:03:56.982461 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb52w\" (UniqueName: \"kubernetes.io/projected/02f1d753-983a-4c4a-b1a0-560de173859a-kube-api-access-mb52w\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:56.982538 master-0 kubenswrapper[7864]: I0224 02:03:56.982481 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:57.000288 master-0 kubenswrapper[7864]: I0224 02:03:57.000247 7864 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 02:03:57.019887 master-0 kubenswrapper[7864]: I0224 02:03:57.019845 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 02:03:57.040429 master-0 kubenswrapper[7864]: I0224 02:03:57.040386 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 02:03:57.043155 master-0 kubenswrapper[7864]: I0224 02:03:57.043109 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:03:57.059890 master-0 kubenswrapper[7864]: I0224 02:03:57.059843 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 02:03:57.061835 master-0 kubenswrapper[7864]: I0224 02:03:57.061762 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-ovnkube-identity-cm\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:03:57.079812 master-0 kubenswrapper[7864]: I0224 02:03:57.079769 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 02:03:57.082102 master-0 kubenswrapper[7864]: I0224 02:03:57.082062 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-env-overrides\") pod \"network-node-identity-p5b6q\" (UID: 
\"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:03:57.083877 master-0 kubenswrapper[7864]: I0224 02:03:57.083796 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-etc-kubernetes\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.083877 master-0 kubenswrapper[7864]: I0224 02:03:57.083862 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:57.083945 master-0 kubenswrapper[7864]: I0224 02:03:57.083903 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:57.083975 master-0 kubenswrapper[7864]: I0224 02:03:57.083948 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-etc-kubernetes\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.084076 master-0 kubenswrapper[7864]: I0224 02:03:57.084031 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:57.084216 master-0 kubenswrapper[7864]: I0224 02:03:57.084179 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:57.084252 master-0 kubenswrapper[7864]: E0224 02:03:57.084126 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:03:57.084252 master-0 kubenswrapper[7864]: I0224 02:03:57.084238 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-netns\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.084306 master-0 kubenswrapper[7864]: I0224 02:03:57.084282 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-kubelet\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.084383 master-0 kubenswrapper[7864]: I0224 02:03:57.084342 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-netns\") pod 
\"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.084415 master-0 kubenswrapper[7864]: I0224 02:03:57.084353 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-kubelet\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.084415 master-0 kubenswrapper[7864]: E0224 02:03:57.084383 7864 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 24 02:03:57.084472 master-0 kubenswrapper[7864]: I0224 02:03:57.084429 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-system-cni-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:57.084472 master-0 kubenswrapper[7864]: I0224 02:03:57.084384 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-system-cni-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:57.084542 master-0 kubenswrapper[7864]: E0224 02:03:57.084519 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.584487598 +0000 UTC m=+1.912141250 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found Feb 24 02:03:57.084608 master-0 kubenswrapper[7864]: I0224 02:03:57.084562 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:57.084717 master-0 kubenswrapper[7864]: I0224 02:03:57.084672 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-socket-dir-parent\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.084747 master-0 kubenswrapper[7864]: E0224 02:03:57.084705 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:57.084775 master-0 kubenswrapper[7864]: I0224 02:03:57.084738 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-bin\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.084821 master-0 kubenswrapper[7864]: E0224 02:03:57.084803 7864 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.584771886 +0000 UTC m=+1.912425548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:57.084853 master-0 kubenswrapper[7864]: I0224 02:03:57.084812 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-bin\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.084889 master-0 kubenswrapper[7864]: I0224 02:03:57.084848 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-ovn\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.084915 master-0 kubenswrapper[7864]: I0224 02:03:57.084876 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-socket-dir-parent\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.084942 master-0 kubenswrapper[7864]: I0224 02:03:57.084908 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:03:57.084994 master-0 kubenswrapper[7864]: I0224 02:03:57.084892 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-ovn\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.084994 master-0 kubenswrapper[7864]: E0224 02:03:57.084945 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.58492835 +0000 UTC m=+1.912582012 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:03:57.085059 master-0 kubenswrapper[7864]: E0224 02:03:57.085005 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 02:03:57.085059 master-0 kubenswrapper[7864]: I0224 02:03:57.085021 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:57.085112 master-0 kubenswrapper[7864]: E0224 02:03:57.085090 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.585060074 +0000 UTC m=+1.912713736 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found Feb 24 02:03:57.085163 master-0 kubenswrapper[7864]: E0224 02:03:57.085136 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 02:03:57.085226 master-0 kubenswrapper[7864]: E0224 02:03:57.085207 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.585181637 +0000 UTC m=+1.912835299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found Feb 24 02:03:57.085262 master-0 kubenswrapper[7864]: I0224 02:03:57.085241 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-systemd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.085309 master-0 kubenswrapper[7864]: I0224 02:03:57.085285 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-etc-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: 
\"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.085372 master-0 kubenswrapper[7864]: I0224 02:03:57.085347 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-etc-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.085472 master-0 kubenswrapper[7864]: I0224 02:03:57.085407 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-systemd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.085472 master-0 kubenswrapper[7864]: I0224 02:03:57.085458 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.085546 master-0 kubenswrapper[7864]: I0224 02:03:57.085526 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.085586 master-0 kubenswrapper[7864]: I0224 02:03:57.085554 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:03:57.085655 master-0 kubenswrapper[7864]: I0224 02:03:57.085631 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:57.085684 master-0 kubenswrapper[7864]: E0224 02:03:57.085660 7864 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:57.085724 master-0 kubenswrapper[7864]: E0224 02:03:57.085711 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.585694782 +0000 UTC m=+1.913348434 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:57.085763 master-0 kubenswrapper[7864]: I0224 02:03:57.085743 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-hostroot\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.085835 master-0 kubenswrapper[7864]: I0224 02:03:57.085816 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:57.085898 master-0 kubenswrapper[7864]: E0224 02:03:57.085868 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:57.085985 master-0 kubenswrapper[7864]: E0224 02:03:57.085967 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.585932338 +0000 UTC m=+1.913586000 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:57.086023 master-0 kubenswrapper[7864]: E0224 02:03:57.085970 7864 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 02:03:57.086023 master-0 kubenswrapper[7864]: I0224 02:03:57.085999 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-hostroot\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.086023 master-0 kubenswrapper[7864]: I0224 02:03:57.085873 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:57.086023 master-0 kubenswrapper[7864]: E0224 02:03:57.086032 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.586019681 +0000 UTC m=+1.913673343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found Feb 24 02:03:57.086155 master-0 kubenswrapper[7864]: E0224 02:03:57.086075 7864 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:57.086155 master-0 kubenswrapper[7864]: I0224 02:03:57.086095 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:57.086155 master-0 kubenswrapper[7864]: E0224 02:03:57.086153 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.586136974 +0000 UTC m=+1.913790626 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found Feb 24 02:03:57.086234 master-0 kubenswrapper[7864]: I0224 02:03:57.086214 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-netd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.086315 master-0 kubenswrapper[7864]: I0224 02:03:57.086296 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:57.086356 master-0 kubenswrapper[7864]: I0224 02:03:57.086337 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-log-socket\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.086423 master-0 kubenswrapper[7864]: I0224 02:03:57.086406 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-log-socket\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.086453 master-0 kubenswrapper[7864]: I0224 02:03:57.086408 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-netd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.086481 master-0 kubenswrapper[7864]: E0224 02:03:57.086451 7864 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 24 02:03:57.086523 master-0 kubenswrapper[7864]: I0224 02:03:57.086475 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:57.086551 master-0 kubenswrapper[7864]: E0224 02:03:57.086533 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.586516365 +0000 UTC m=+1.914170017 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found Feb 24 02:03:57.086625 master-0 kubenswrapper[7864]: I0224 02:03:57.086596 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:57.086656 master-0 kubenswrapper[7864]: E0224 02:03:57.086621 7864 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 24 02:03:57.086683 master-0 kubenswrapper[7864]: I0224 02:03:57.086654 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.086711 master-0 kubenswrapper[7864]: E0224 02:03:57.086665 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.586651819 +0000 UTC m=+1.914305471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : secret "metrics-daemon-secret" not found
Feb 24 02:03:57.086781 master-0 kubenswrapper[7864]: I0224 02:03:57.086748 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.086815 master-0 kubenswrapper[7864]: E0224 02:03:57.086777 7864 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 02:03:57.086862 master-0 kubenswrapper[7864]: E0224 02:03:57.086843 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.586825324 +0000 UTC m=+1.914478986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found
Feb 24 02:03:57.086970 master-0 kubenswrapper[7864]: I0224 02:03:57.086944 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-k8s-cni-cncf-io\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.087002 master-0 kubenswrapper[7864]: I0224 02:03:57.086884 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-k8s-cni-cncf-io\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.087257 master-0 kubenswrapper[7864]: I0224 02:03:57.087214 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-cnibin\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:03:57.087331 master-0 kubenswrapper[7864]: I0224 02:03:57.087297 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-os-release\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:03:57.087331 master-0 kubenswrapper[7864]: I0224 02:03:57.087332 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-cnibin\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:03:57.087435 master-0 kubenswrapper[7864]: I0224 02:03:57.087392 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-multus-certs\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.087435 master-0 kubenswrapper[7864]: I0224 02:03:57.087411 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-os-release\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:03:57.087553 master-0 kubenswrapper[7864]: I0224 02:03:57.087445 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:03:57.087553 master-0 kubenswrapper[7864]: I0224 02:03:57.087451 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-multus-certs\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.087553 master-0 kubenswrapper[7864]: I0224 02:03:57.087504 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:03:57.087659 master-0 kubenswrapper[7864]: I0224 02:03:57.087549 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:03:57.087659 master-0 kubenswrapper[7864]: I0224 02:03:57.087625 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:03:57.087713 master-0 kubenswrapper[7864]: E0224 02:03:57.087680 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 24 02:03:57.087713 master-0 kubenswrapper[7864]: I0224 02:03:57.087692 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-os-release\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.087767 master-0 kubenswrapper[7864]: E0224 02:03:57.087736 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.587720399 +0000 UTC m=+1.915374061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found
Feb 24 02:03:57.087812 master-0 kubenswrapper[7864]: E0224 02:03:57.087791 7864 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 02:03:57.087840 master-0 kubenswrapper[7864]: I0224 02:03:57.087810 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-os-release\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.087894 master-0 kubenswrapper[7864]: E0224 02:03:57.087851 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.587832562 +0000 UTC m=+1.915486224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found
Feb 24 02:03:57.087950 master-0 kubenswrapper[7864]: I0224 02:03:57.087887 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.087978 master-0 kubenswrapper[7864]: I0224 02:03:57.087952 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-bin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.087978 master-0 kubenswrapper[7864]: I0224 02:03:57.087957 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.088052 master-0 kubenswrapper[7864]: I0224 02:03:57.088006 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3332acec-1553-4594-a903-a322399f6d9d-host-etc-kube\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:03:57.088126 master-0 kubenswrapper[7864]: I0224 02:03:57.088051 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-bin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.088126 master-0 kubenswrapper[7864]: I0224 02:03:57.088078 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:03:57.088233 master-0 kubenswrapper[7864]: I0224 02:03:57.088136 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-var-lib-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.088233 master-0 kubenswrapper[7864]: I0224 02:03:57.088179 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:03:57.088233 master-0 kubenswrapper[7864]: I0224 02:03:57.088178 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-conf-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.088345 master-0 kubenswrapper[7864]: I0224 02:03:57.088219 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-conf-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.088345 master-0 kubenswrapper[7864]: I0224 02:03:57.088141 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3332acec-1553-4594-a903-a322399f6d9d-host-etc-kube\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:03:57.088345 master-0 kubenswrapper[7864]: I0224 02:03:57.088298 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-var-lib-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.088345 master-0 kubenswrapper[7864]: I0224 02:03:57.088314 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-node-log\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.088445 master-0 kubenswrapper[7864]: I0224 02:03:57.088404 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-node-log\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.088529 master-0 kubenswrapper[7864]: I0224 02:03:57.088492 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-system-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.088664 master-0 kubenswrapper[7864]: I0224 02:03:57.088569 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cnibin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.088769 master-0 kubenswrapper[7864]: I0224 02:03:57.088646 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-system-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.088800 master-0 kubenswrapper[7864]: I0224 02:03:57.088658 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cnibin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.088829 master-0 kubenswrapper[7864]: I0224 02:03:57.088738 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.088902 master-0 kubenswrapper[7864]: I0224 02:03:57.088864 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-systemd-units\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.088902 master-0 kubenswrapper[7864]: I0224 02:03:57.088890 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.088965 master-0 kubenswrapper[7864]: I0224 02:03:57.088924 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-slash\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.089013 master-0 kubenswrapper[7864]: I0224 02:03:57.088988 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-slash\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.089059 master-0 kubenswrapper[7864]: I0224 02:03:57.089031 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-netns\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.089108 master-0 kubenswrapper[7864]: I0224 02:03:57.089082 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-netns\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.089143 master-0 kubenswrapper[7864]: I0224 02:03:57.089045 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-systemd-units\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.089143 master-0 kubenswrapper[7864]: I0224 02:03:57.089125 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:57.089244 master-0 kubenswrapper[7864]: I0224 02:03:57.089220 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-multus\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.089273 master-0 kubenswrapper[7864]: E0224 02:03:57.089226 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Feb 24 02:03:57.089302 master-0 kubenswrapper[7864]: I0224 02:03:57.089271 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-multus\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:03:57.089364 master-0 kubenswrapper[7864]: E0224 02:03:57.089309 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.589292913 +0000 UTC m=+1.916946575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found
Feb 24 02:03:57.089364 master-0 kubenswrapper[7864]: I0224 02:03:57.089347 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:03:57.089445 master-0 kubenswrapper[7864]: I0224 02:03:57.089420 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-kubelet\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.089491 master-0 kubenswrapper[7864]: I0224 02:03:57.089470 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-host-slash\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5"
Feb 24 02:03:57.089544 master-0 kubenswrapper[7864]: E0224 02:03:57.089523 7864 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 24 02:03:57.089624 master-0 kubenswrapper[7864]: E0224 02:03:57.089609 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:57.589591731 +0000 UTC m=+1.917245383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found
Feb 24 02:03:57.089668 master-0 kubenswrapper[7864]: I0224 02:03:57.089606 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-kubelet\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.089668 master-0 kubenswrapper[7864]: I0224 02:03:57.089629 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-host-slash\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5"
Feb 24 02:03:57.099128 master-0 kubenswrapper[7864]: I0224 02:03:57.099082 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 24 02:03:57.120528 master-0 kubenswrapper[7864]: I0224 02:03:57.120485 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 24 02:03:57.142271 master-0 kubenswrapper[7864]: I0224 02:03:57.142216 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 24 02:03:57.152514 master-0 kubenswrapper[7864]: I0224 02:03:57.152480 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-iptables-alerter-script\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5"
Feb 24 02:03:57.159263 master-0 kubenswrapper[7864]: I0224 02:03:57.159230 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 24 02:03:57.169026 master-0 kubenswrapper[7864]: I0224 02:03:57.168974 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovn-node-metrics-cert\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.180270 master-0 kubenswrapper[7864]: I0224 02:03:57.180220 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 24 02:03:57.186736 master-0 kubenswrapper[7864]: I0224 02:03:57.186697 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-script-lib\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:03:57.216784 master-0 kubenswrapper[7864]: I0224 02:03:57.216739 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:03:57.223756 master-0 kubenswrapper[7864]: I0224 02:03:57.223725 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:03:57.226676 master-0 kubenswrapper[7864]: E0224 02:03:57.226630 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:03:57.248338 master-0 kubenswrapper[7864]: E0224 02:03:57.248271 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 02:03:57.267415 master-0 kubenswrapper[7864]: E0224 02:03:57.267373 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:03:57.288608 master-0 kubenswrapper[7864]: W0224 02:03:57.288551 7864 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 24 02:03:57.288671 master-0 kubenswrapper[7864]: E0224 02:03:57.288654 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 02:03:57.310059 master-0 kubenswrapper[7864]: E0224 02:03:57.310014 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:03:57.338322 master-0 kubenswrapper[7864]: I0224 02:03:57.338222 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwjpw\" (UniqueName: \"kubernetes.io/projected/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-kube-api-access-pwjpw\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:03:57.359699 master-0 kubenswrapper[7864]: I0224 02:03:57.359660 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82hfh\" (UniqueName: \"kubernetes.io/projected/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-kube-api-access-82hfh\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:57.383203 master-0 kubenswrapper[7864]: I0224 02:03:57.383161 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6lsp\" (UniqueName: \"kubernetes.io/projected/db8d6627-394c-4087-bfa4-bf7580f6bb4b-kube-api-access-x6lsp\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:03:57.402006 master-0 kubenswrapper[7864]: I0224 02:03:57.401934 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twgrj\" (UniqueName: \"kubernetes.io/projected/12b89e05-a503-47aa-90b2-4d741e015b19-kube-api-access-twgrj\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:03:57.424601 master-0 kubenswrapper[7864]: I0224 02:03:57.424530 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sp95\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-kube-api-access-9sp95\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:57.434024 master-0 kubenswrapper[7864]: I0224 02:03:57.433993 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:57.443320 master-0 kubenswrapper[7864]: I0224 02:03:57.443290 7864 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 02:03:57.454482 master-0 kubenswrapper[7864]: I0224 02:03:57.454437 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6f7j\" (UniqueName: \"kubernetes.io/projected/cabdddba-5507-4e47-98ef-a00c6d0f305d-kube-api-access-h6f7j\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:03:57.474827 master-0 kubenswrapper[7864]: I0224 02:03:57.474775 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nwzm\" (UniqueName: \"kubernetes.io/projected/c92835f0-7f32-4584-8304-843d7979392a-kube-api-access-6nwzm\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:03:57.502487 master-0 kubenswrapper[7864]: I0224 02:03:57.502441 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgjlz\" (UniqueName: \"kubernetes.io/projected/6320dbb5-b84d-4a57-8c65-fbed8421f84a-kube-api-access-pgjlz\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:03:57.515022 master-0 kubenswrapper[7864]: I0224 02:03:57.514972 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ssxg\" (UniqueName: \"kubernetes.io/projected/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-kube-api-access-2ssxg\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"
Feb 24 02:03:57.532884 master-0 kubenswrapper[7864]: I0224 02:03:57.532833 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nmd6\" (UniqueName: \"kubernetes.io/projected/70e2ba24-4871-4d1d-9935-156fdbeb2810-kube-api-access-4nmd6\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:03:57.561157 master-0 kubenswrapper[7864]: I0224 02:03:57.561106 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtf52\" (UniqueName: \"kubernetes.io/projected/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-kube-api-access-jtf52\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr"
Feb 24 02:03:57.581586 master-0 kubenswrapper[7864]: I0224 02:03:57.581528 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpg26\" (UniqueName: \"kubernetes.io/projected/fcbda577-b943-4b5c-b041-948aece8e40f-kube-api-access-vpg26\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599420 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599472 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599504 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599537 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599556 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599627 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599649 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599670 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599687 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599707 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599729 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2"
Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599733 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlg2j\" (UniqueName: \"kubernetes.io/projected/523033b8-4101-4a55-8320-55bef04ddaaf-kube-api-access-dlg2j\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") "
pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: E0224 02:03:57.599882 7864 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: E0224 02:03:57.599925 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.599912435 +0000 UTC m=+2.927566057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: E0224 02:03:57.599956 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: E0224 02:03:57.600022 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.599996787 +0000 UTC m=+2.927650409 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.599755 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.600087 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.600117 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:57.600166 master-0 kubenswrapper[7864]: I0224 02:03:57.600156 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:57.600166 master-0 
kubenswrapper[7864]: I0224 02:03:57.600192 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600331 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600358 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600350577 +0000 UTC m=+2.928004199 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600393 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600409 7864 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600440 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. 
No retries permitted until 2026-02-24 02:03:58.600425019 +0000 UTC m=+2.928078641 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600458 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.60044797 +0000 UTC m=+2.928101842 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600472 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600497 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600489711 +0000 UTC m=+2.928143333 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600531 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600550 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600544623 +0000 UTC m=+2.928198245 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600598 7864 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600618 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600610425 +0000 UTC m=+2.928264047 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600652 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600668 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600663616 +0000 UTC m=+2.928317238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600701 7864 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600719 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600712177 +0000 UTC m=+2.928365799 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600764 7864 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600781 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600775149 +0000 UTC m=+2.928428771 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600841 7864 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600861 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600853731 +0000 UTC m=+2.928507353 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600895 7864 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600912 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600906513 +0000 UTC m=+2.928560135 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : secret "metrics-daemon-secret" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600945 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600962 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.600956944 +0000 UTC m=+2.928610566 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.600992 7864 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 02:03:57.601072 master-0 kubenswrapper[7864]: E0224 02:03:57.601008 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.601003306 +0000 UTC m=+2.928656928 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found Feb 24 02:03:57.601992 master-0 kubenswrapper[7864]: E0224 02:03:57.601385 7864 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:57.601992 master-0 kubenswrapper[7864]: E0224 02:03:57.601444 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:03:58.601430098 +0000 UTC m=+2.929083950 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found Feb 24 02:03:57.618409 master-0 kubenswrapper[7864]: I0224 02:03:57.618368 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8jx\" (UniqueName: \"kubernetes.io/projected/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-kube-api-access-rc8jx\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:03:57.640353 master-0 kubenswrapper[7864]: I0224 02:03:57.640297 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f72a322-2142-482a-9b0b-2ad890181d7a-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:57.656026 master-0 kubenswrapper[7864]: I0224 02:03:57.655980 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssz8p\" (UniqueName: \"kubernetes.io/projected/6a9ccd8e-d964-4c03-8ffc-51b464030c25-kube-api-access-ssz8p\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:57.682488 master-0 kubenswrapper[7864]: I0224 02:03:57.682392 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8rjx\" (UniqueName: \"kubernetes.io/projected/91d16f7b-390a-4d9d-99d6-cc8e210801d1-kube-api-access-b8rjx\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " 
pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:57.698591 master-0 kubenswrapper[7864]: I0224 02:03:57.698519 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-bound-sa-token\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:57.714190 master-0 kubenswrapper[7864]: I0224 02:03:57.714096 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85222bf-f51a-4232-8db1-1e6ee593617b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:03:57.732478 master-0 kubenswrapper[7864]: I0224 02:03:57.732405 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bl6\" (UniqueName: \"kubernetes.io/projected/303d5058-84df-40d1-a941-896b093ae470-kube-api-access-79bl6\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:03:57.763520 master-0 kubenswrapper[7864]: I0224 02:03:57.763458 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njjq8\" (UniqueName: \"kubernetes.io/projected/b36d8451-0fda-4d9d-a850-d05c8f847016-kube-api-access-njjq8\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:03:57.774766 master-0 kubenswrapper[7864]: I0224 02:03:57.774712 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bbnd2\" (UniqueName: \"kubernetes.io/projected/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-kube-api-access-bbnd2\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:03:57.800463 master-0 kubenswrapper[7864]: I0224 02:03:57.800402 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqkv\" (UniqueName: \"kubernetes.io/projected/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-kube-api-access-dqqkv\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:57.817653 master-0 kubenswrapper[7864]: I0224 02:03:57.817609 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8hv\" (UniqueName: \"kubernetes.io/projected/2cb764f6-40f8-4e87-8be0-b9d7b0364201-kube-api-access-sp8hv\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:57.835900 master-0 kubenswrapper[7864]: I0224 02:03:57.835839 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6qs2\" (UniqueName: \"kubernetes.io/projected/3332acec-1553-4594-a903-a322399f6d9d-kube-api-access-x6qs2\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:03:57.862025 master-0 kubenswrapper[7864]: I0224 02:03:57.861427 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qph4g\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-kube-api-access-qph4g\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:57.875215 
master-0 kubenswrapper[7864]: I0224 02:03:57.875168 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86pcb\" (UniqueName: \"kubernetes.io/projected/7b098bd4-5751-4b01-8409-0688fd29233e-kube-api-access-86pcb\") pod \"csi-snapshot-controller-operator-6fb4df594f-c95qc\" (UID: \"7b098bd4-5751-4b01-8409-0688fd29233e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" Feb 24 02:03:57.893903 master-0 kubenswrapper[7864]: I0224 02:03:57.893852 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:03:57.913868 master-0 kubenswrapper[7864]: I0224 02:03:57.913823 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv6t5\" (UniqueName: \"kubernetes.io/projected/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-kube-api-access-qv6t5\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:03:57.941040 master-0 kubenswrapper[7864]: I0224 02:03:57.941004 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrmsh\" (UniqueName: \"kubernetes.io/projected/57811d07-ae8a-44b7-8efb-dafc5afad31e-kube-api-access-vrmsh\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:03:57.960033 master-0 kubenswrapper[7864]: I0224 02:03:57.960006 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a02536a3-7d3e-4e74-9625-aefed518ec35-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:03:57.970093 master-0 kubenswrapper[7864]: I0224 02:03:57.970057 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtld\" (UniqueName: \"kubernetes.io/projected/adc1097b-c1ab-4f09-965d-1c819671475b-kube-api-access-nqtld\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:03:57.990671 master-0 kubenswrapper[7864]: I0224 02:03:57.990632 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2b65\" (UniqueName: \"kubernetes.io/projected/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-kube-api-access-n2b65\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:03:57.998469 master-0 kubenswrapper[7864]: I0224 02:03:57.998419 7864 request.go:700] Waited for 1.015614095s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token Feb 24 02:03:58.020454 master-0 kubenswrapper[7864]: I0224 02:03:58.020397 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb52w\" (UniqueName: \"kubernetes.io/projected/02f1d753-983a-4c4a-b1a0-560de173859a-kube-api-access-mb52w\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:58.034699 
master-0 kubenswrapper[7864]: I0224 02:03:58.032822 7864 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 02:03:58.041013 master-0 kubenswrapper[7864]: I0224 02:03:58.040956 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:58.044541 master-0 kubenswrapper[7864]: I0224 02:03:58.044463 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:03:58.513788 master-0 kubenswrapper[7864]: I0224 02:03:58.513053 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:03:58.614724 master-0 kubenswrapper[7864]: I0224 02:03:58.614644 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:58.614724 master-0 kubenswrapper[7864]: I0224 02:03:58.614714 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " 
pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:03:58.614939 master-0 kubenswrapper[7864]: I0224 02:03:58.614750 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:03:58.615009 master-0 kubenswrapper[7864]: E0224 02:03:58.614966 7864 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 02:03:58.615111 master-0 kubenswrapper[7864]: I0224 02:03:58.615059 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:03:58.615111 master-0 kubenswrapper[7864]: E0224 02:03:58.615082 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615051497 +0000 UTC m=+4.942705119 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found Feb 24 02:03:58.615367 master-0 kubenswrapper[7864]: E0224 02:03:58.614986 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:58.615367 master-0 kubenswrapper[7864]: E0224 02:03:58.615179 7864 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 24 02:03:58.615367 master-0 kubenswrapper[7864]: E0224 02:03:58.615096 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:03:58.615367 master-0 kubenswrapper[7864]: E0224 02:03:58.615233 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615205012 +0000 UTC m=+4.942858674 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found Feb 24 02:03:58.615367 master-0 kubenswrapper[7864]: E0224 02:03:58.615273 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. 
No retries permitted until 2026-02-24 02:04:00.615253973 +0000 UTC m=+4.942907605 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:03:58.615367 master-0 kubenswrapper[7864]: E0224 02:03:58.615291 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615283244 +0000 UTC m=+4.942936876 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found Feb 24 02:03:58.615367 master-0 kubenswrapper[7864]: I0224 02:03:58.615310 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:58.615367 master-0 kubenswrapper[7864]: I0224 02:03:58.615343 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:03:58.615367 master-0 kubenswrapper[7864]: I0224 02:03:58.615370 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: I0224 02:03:58.615403 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: I0224 02:03:58.615428 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: I0224 02:03:58.615460 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: I0224 02:03:58.615490 7864 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615501 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: I0224 02:03:58.615519 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615538 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615530511 +0000 UTC m=+4.943184133 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: I0224 02:03:58.615588 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615605 7864 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: I0224 02:03:58.615625 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615640 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615629984 +0000 UTC m=+4.943283626 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615712 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615720 7864 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615740 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615733457 +0000 UTC m=+4.943387079 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615758 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615750657 +0000 UTC m=+4.943404279 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615786 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615814 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615805189 +0000 UTC m=+4.943458821 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615817 7864 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:58.615836 master-0 kubenswrapper[7864]: E0224 02:03:58.615842 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615835019 +0000 UTC m=+4.943488641 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.615874 7864 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.615904 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615896841 +0000 UTC m=+4.943550473 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.615904 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.615933 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615928192 +0000 UTC m=+4.943581814 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: I0224 02:03:58.615955 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.615966 7864 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: I0224 02:03:58.615984 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.615995 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.615987584 +0000 UTC m=+4.943641216 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.615879 7864 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.616038 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.616030445 +0000 UTC m=+4.943684077 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : secret "metrics-daemon-secret" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.616093 7864 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.616121 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.616111547 +0000 UTC m=+4.943765179 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.616135 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 24 02:03:58.616451 master-0 kubenswrapper[7864]: E0224 02:03:58.616170 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:00.616161599 +0000 UTC m=+4.943815221 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found Feb 24 02:03:59.267105 master-0 kubenswrapper[7864]: E0224 02:03:59.267038 7864 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" Feb 24 02:03:59.267906 master-0 kubenswrapper[7864]: E0224 02:03:59.267406 7864 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qv6t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-rjbl5_openshift-network-operator(d8e20d47-aeb6-41bf-9715-c437beb8e9e4): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 02:03:59.268627 master-0 kubenswrapper[7864]: E0224 02:03:59.268556 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-rjbl5" podUID="d8e20d47-aeb6-41bf-9715-c437beb8e9e4" Feb 24 02:03:59.534199 master-0 kubenswrapper[7864]: I0224 02:03:59.533525 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=3.533488529 podStartE2EDuration="3.533488529s" podCreationTimestamp="2026-02-24 02:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:03:59.529034234 +0000 UTC m=+3.856687926" watchObservedRunningTime="2026-02-24 02:03:59.533488529 +0000 UTC m=+3.861142181" Feb 24 02:03:59.561119 master-0 kubenswrapper[7864]: I0224 02:03:59.560709 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-54b95"] Feb 24 02:03:59.972087 master-0 kubenswrapper[7864]: I0224 02:03:59.971061 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-54b95" event={"ID":"e3a675b9-feaa-4456-b7b4-0cd3afc42a42","Type":"ContainerStarted","Data":"ca461cd5846178a42e36d7c5be475acd0be7b72129a007ae8d0fef2ce6b0c63e"} Feb 24 02:03:59.972087 master-0 kubenswrapper[7864]: I0224 02:03:59.972033 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-54b95" event={"ID":"e3a675b9-feaa-4456-b7b4-0cd3afc42a42","Type":"ContainerStarted","Data":"3d94304059d808624e692a18999e46c1ed32aa07c16bb3ea5a63de6a687dd377"} Feb 24 02:03:59.974967 master-0 kubenswrapper[7864]: I0224 
02:03:59.974908 7864 generic.go:334] "Generic (PLEG): container finished" podID="303d5058-84df-40d1-a941-896b093ae470" containerID="4d87ace597126f6a6c5b7ecfae7ff8d57f99cad256a801b2bb6027c85887bf7c" exitCode=0 Feb 24 02:03:59.975111 master-0 kubenswrapper[7864]: I0224 02:03:59.975064 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerDied","Data":"4d87ace597126f6a6c5b7ecfae7ff8d57f99cad256a801b2bb6027c85887bf7c"} Feb 24 02:03:59.977811 master-0 kubenswrapper[7864]: I0224 02:03:59.977761 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" event={"ID":"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7","Type":"ContainerStarted","Data":"68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049"} Feb 24 02:03:59.984093 master-0 kubenswrapper[7864]: I0224 02:03:59.984040 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" event={"ID":"7b098bd4-5751-4b01-8409-0688fd29233e","Type":"ContainerStarted","Data":"cd4d613347fcadf7d18c80092bf87aa7750bed8421d9d1f3c7ea9c90740b5523"} Feb 24 02:03:59.985403 master-0 kubenswrapper[7864]: I0224 02:03:59.985374 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerStarted","Data":"5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce"} Feb 24 02:03:59.991126 master-0 kubenswrapper[7864]: I0224 02:03:59.991082 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" 
event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerStarted","Data":"681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485"} Feb 24 02:03:59.996336 master-0 kubenswrapper[7864]: I0224 02:03:59.996311 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" event={"ID":"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c","Type":"ContainerStarted","Data":"e9d2b8b0026aada75f8d27003b5a7df3b3f0253b60da8ae01339ca6b74582705"} Feb 24 02:04:00.002472 master-0 kubenswrapper[7864]: I0224 02:04:00.002432 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" event={"ID":"fcbda577-b943-4b5c-b041-948aece8e40f","Type":"ContainerStarted","Data":"34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc"} Feb 24 02:04:00.011264 master-0 kubenswrapper[7864]: I0224 02:04:00.011224 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" event={"ID":"a02536a3-7d3e-4e74-9625-aefed518ec35","Type":"ContainerStarted","Data":"7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84"} Feb 24 02:04:00.015116 master-0 kubenswrapper[7864]: I0224 02:04:00.015067 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" event={"ID":"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2","Type":"ContainerStarted","Data":"fcdcbf5a149a0b578453f994965bc5ee1ca152377a3fb51c3ba2512d342fe454"} Feb 24 02:04:00.023917 master-0 kubenswrapper[7864]: I0224 02:04:00.023889 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" 
event={"ID":"9b5620d6-a5fe-45d7-b39e-8bed7f602a17","Type":"ContainerStarted","Data":"18e36dffd25cf50db60c55874ae5e83aa35faa4fa1dff2c477ec4899a01aa1f0"} Feb 24 02:04:00.027645 master-0 kubenswrapper[7864]: I0224 02:04:00.027608 7864 generic.go:334] "Generic (PLEG): container finished" podID="c92835f0-7f32-4584-8304-843d7979392a" containerID="ec5c26bb0883484781a82be8d7bf1a6eb78e1cb6c0192ee0fe34ebba8f9531c4" exitCode=0 Feb 24 02:04:00.027722 master-0 kubenswrapper[7864]: I0224 02:04:00.027654 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerDied","Data":"ec5c26bb0883484781a82be8d7bf1a6eb78e1cb6c0192ee0fe34ebba8f9531c4"} Feb 24 02:04:00.213436 master-0 kubenswrapper[7864]: I0224 02:04:00.213015 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:04:00.213436 master-0 kubenswrapper[7864]: I0224 02:04:00.213191 7864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:04:00.217175 master-0 kubenswrapper[7864]: I0224 02:04:00.217139 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:04:00.425218 master-0 kubenswrapper[7864]: I0224 02:04:00.424966 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:04:00.479300 master-0 kubenswrapper[7864]: I0224 02:04:00.479210 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:04:00.653231 master-0 kubenswrapper[7864]: I0224 02:04:00.653156 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod 
\"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:04:00.653231 master-0 kubenswrapper[7864]: I0224 02:04:00.653224 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653252 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653305 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653330 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653353 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653373 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653392 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653414 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653433 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653452 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653470 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653494 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:04:00.653510 master-0 kubenswrapper[7864]: I0224 02:04:00.653513 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: 
I0224 02:04:00.653545 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: I0224 02:04:00.653584 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: E0224 02:04:00.653703 7864 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: E0224 02:04:00.653761 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.6537446 +0000 UTC m=+8.981398222 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: E0224 02:04:00.653825 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: E0224 02:04:00.653849 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.653840513 +0000 UTC m=+8.981494135 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: E0224 02:04:00.653885 7864 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: E0224 02:04:00.653902 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.653896715 +0000 UTC m=+8.981550337 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: E0224 02:04:00.653935 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 02:04:00.653972 master-0 kubenswrapper[7864]: E0224 02:04:00.653952 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.653946676 +0000 UTC m=+8.981600298 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654032 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654056 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654048399 +0000 UTC m=+8.981702021 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654099 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654119 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654111081 +0000 UTC m=+8.981764703 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654151 7864 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654167 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654162372 +0000 UTC m=+8.981815994 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654200 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654221 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654215214 +0000 UTC m=+8.981868836 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654252 7864 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654267 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654262705 +0000 UTC m=+8.981916327 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654298 7864 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654314 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654309756 +0000 UTC m=+8.981963378 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654342 7864 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654360 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654356038 +0000 UTC m=+8.982009650 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654389 7864 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654404 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654399529 +0000 UTC m=+8.982053151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654436 7864 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654451 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.65444624 +0000 UTC m=+8.982099862 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : secret "metrics-daemon-secret" not found Feb 24 02:04:00.654459 master-0 kubenswrapper[7864]: E0224 02:04:00.654480 7864 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 02:04:00.655254 master-0 kubenswrapper[7864]: E0224 02:04:00.654497 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654491831 +0000 UTC m=+8.982145453 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found Feb 24 02:04:00.655254 master-0 kubenswrapper[7864]: E0224 02:04:00.654526 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 24 02:04:00.655254 master-0 kubenswrapper[7864]: E0224 02:04:00.654542 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654537703 +0000 UTC m=+8.982191325 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found Feb 24 02:04:00.655254 master-0 kubenswrapper[7864]: E0224 02:04:00.654589 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Feb 24 02:04:00.655254 master-0 kubenswrapper[7864]: E0224 02:04:00.654607 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.654601584 +0000 UTC m=+8.982255206 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found Feb 24 02:04:00.930654 master-0 kubenswrapper[7864]: I0224 02:04:00.930295 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x"] Feb 24 02:04:00.930872 master-0 kubenswrapper[7864]: E0224 02:04:00.930771 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerName="assisted-installer-controller" Feb 24 02:04:00.930872 master-0 kubenswrapper[7864]: I0224 02:04:00.930786 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerName="assisted-installer-controller" Feb 24 02:04:00.930872 master-0 kubenswrapper[7864]: E0224 02:04:00.930800 7864 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e8e1e397-edd4-4278-b3ea-25fe829de509" containerName="prober" Feb 24 02:04:00.930872 master-0 kubenswrapper[7864]: I0224 02:04:00.930806 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e1e397-edd4-4278-b3ea-25fe829de509" containerName="prober" Feb 24 02:04:00.930872 master-0 kubenswrapper[7864]: I0224 02:04:00.930855 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e1e397-edd4-4278-b3ea-25fe829de509" containerName="prober" Feb 24 02:04:00.930872 master-0 kubenswrapper[7864]: I0224 02:04:00.930866 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerName="assisted-installer-controller" Feb 24 02:04:00.931597 master-0 kubenswrapper[7864]: I0224 02:04:00.931158 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" Feb 24 02:04:00.933073 master-0 kubenswrapper[7864]: I0224 02:04:00.933029 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn"] Feb 24 02:04:00.933646 master-0 kubenswrapper[7864]: I0224 02:04:00.933614 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" Feb 24 02:04:00.937322 master-0 kubenswrapper[7864]: I0224 02:04:00.936237 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 02:04:00.937322 master-0 kubenswrapper[7864]: I0224 02:04:00.937281 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 02:04:00.946310 master-0 kubenswrapper[7864]: I0224 02:04:00.946268 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x"] Feb 24 02:04:00.946399 master-0 kubenswrapper[7864]: I0224 02:04:00.946315 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn"] Feb 24 02:04:01.049680 master-0 kubenswrapper[7864]: I0224 02:04:01.049641 7864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:04:01.049680 master-0 kubenswrapper[7864]: I0224 02:04:01.049666 7864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:04:01.050178 master-0 kubenswrapper[7864]: I0224 02:04:01.050159 7864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:04:01.062991 master-0 kubenswrapper[7864]: I0224 02:04:01.060961 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8742\" (UniqueName: \"kubernetes.io/projected/f807f33c-8132-48a8-ab12-4b54c1cd2b10-kube-api-access-g8742\") pod \"migrator-5c85bff57-t5rgn\" (UID: \"f807f33c-8132-48a8-ab12-4b54c1cd2b10\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" Feb 24 02:04:01.062991 master-0 kubenswrapper[7864]: I0224 02:04:01.060997 7864 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7lsb\" (UniqueName: \"kubernetes.io/projected/f6e7b773-7ecd-4a5c-8bef-d672f371e7e5-kube-api-access-q7lsb\") pod \"csi-snapshot-controller-6847bb4785-8l58x\" (UID: \"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" Feb 24 02:04:01.165308 master-0 kubenswrapper[7864]: I0224 02:04:01.162637 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8742\" (UniqueName: \"kubernetes.io/projected/f807f33c-8132-48a8-ab12-4b54c1cd2b10-kube-api-access-g8742\") pod \"migrator-5c85bff57-t5rgn\" (UID: \"f807f33c-8132-48a8-ab12-4b54c1cd2b10\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" Feb 24 02:04:01.165308 master-0 kubenswrapper[7864]: I0224 02:04:01.162680 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7lsb\" (UniqueName: \"kubernetes.io/projected/f6e7b773-7ecd-4a5c-8bef-d672f371e7e5-kube-api-access-q7lsb\") pod \"csi-snapshot-controller-6847bb4785-8l58x\" (UID: \"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" Feb 24 02:04:01.184638 master-0 kubenswrapper[7864]: I0224 02:04:01.184069 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:04:01.191629 master-0 kubenswrapper[7864]: I0224 02:04:01.191566 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7lsb\" (UniqueName: \"kubernetes.io/projected/f6e7b773-7ecd-4a5c-8bef-d672f371e7e5-kube-api-access-q7lsb\") pod \"csi-snapshot-controller-6847bb4785-8l58x\" (UID: \"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" Feb 24 02:04:01.194329 master-0 kubenswrapper[7864]: I0224 
02:04:01.194278 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8742\" (UniqueName: \"kubernetes.io/projected/f807f33c-8132-48a8-ab12-4b54c1cd2b10-kube-api-access-g8742\") pod \"migrator-5c85bff57-t5rgn\" (UID: \"f807f33c-8132-48a8-ab12-4b54c1cd2b10\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" Feb 24 02:04:01.264472 master-0 kubenswrapper[7864]: I0224 02:04:01.264395 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" Feb 24 02:04:01.274981 master-0 kubenswrapper[7864]: I0224 02:04:01.274942 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" Feb 24 02:04:01.469287 master-0 kubenswrapper[7864]: I0224 02:04:01.468809 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn"] Feb 24 02:04:01.484230 master-0 kubenswrapper[7864]: I0224 02:04:01.484192 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x"] Feb 24 02:04:01.488148 master-0 kubenswrapper[7864]: W0224 02:04:01.488085 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf807f33c_8132_48a8_ab12_4b54c1cd2b10.slice/crio-90a0d6bae4f861a78e1bdfe5f47cce060c508d92cdd797bd0fed3982c351779c WatchSource:0}: Error finding container 90a0d6bae4f861a78e1bdfe5f47cce060c508d92cdd797bd0fed3982c351779c: Status 404 returned error can't find the container with id 90a0d6bae4f861a78e1bdfe5f47cce060c508d92cdd797bd0fed3982c351779c Feb 24 02:04:01.489068 master-0 kubenswrapper[7864]: W0224 02:04:01.489035 7864 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e7b773_7ecd_4a5c_8bef_d672f371e7e5.slice/crio-e7c778232fad4af52f47c31c73f233a65718cb5d7849085291ba01455710c481 WatchSource:0}: Error finding container e7c778232fad4af52f47c31c73f233a65718cb5d7849085291ba01455710c481: Status 404 returned error can't find the container with id e7c778232fad4af52f47c31c73f233a65718cb5d7849085291ba01455710c481 Feb 24 02:04:01.588277 master-0 kubenswrapper[7864]: I0224 02:04:01.587309 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:04:01.594088 master-0 kubenswrapper[7864]: I0224 02:04:01.593844 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:04:01.901064 master-0 kubenswrapper[7864]: I0224 02:04:01.900938 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2"] Feb 24 02:04:01.901512 master-0 kubenswrapper[7864]: I0224 02:04:01.901456 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:01.907449 master-0 kubenswrapper[7864]: I0224 02:04:01.907424 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 02:04:01.907548 master-0 kubenswrapper[7864]: I0224 02:04:01.907456 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 02:04:01.907628 master-0 kubenswrapper[7864]: I0224 02:04:01.907605 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 02:04:01.907844 master-0 kubenswrapper[7864]: I0224 02:04:01.907826 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 02:04:01.907943 master-0 kubenswrapper[7864]: I0224 02:04:01.907915 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 02:04:01.913019 master-0 kubenswrapper[7864]: I0224 02:04:01.912966 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 02:04:01.914719 master-0 kubenswrapper[7864]: I0224 02:04:01.914686 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2"] Feb 24 02:04:01.974665 master-0 kubenswrapper[7864]: I0224 02:04:01.974561 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:01.974975 master-0 kubenswrapper[7864]: I0224 02:04:01.974774 7864 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:01.975023 master-0 kubenswrapper[7864]: I0224 02:04:01.974993 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:01.975163 master-0 kubenswrapper[7864]: I0224 02:04:01.975122 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/2c4b40e8-db78-49ed-99c1-46945964bba6-kube-api-access-d5zxk\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:01.975364 master-0 kubenswrapper[7864]: I0224 02:04:01.975325 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.056234 master-0 kubenswrapper[7864]: I0224 02:04:02.056145 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" 
event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerStarted","Data":"e7c778232fad4af52f47c31c73f233a65718cb5d7849085291ba01455710c481"} Feb 24 02:04:02.057370 master-0 kubenswrapper[7864]: I0224 02:04:02.057330 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" event={"ID":"f807f33c-8132-48a8-ab12-4b54c1cd2b10","Type":"ContainerStarted","Data":"90a0d6bae4f861a78e1bdfe5f47cce060c508d92cdd797bd0fed3982c351779c"} Feb 24 02:04:02.076628 master-0 kubenswrapper[7864]: I0224 02:04:02.076560 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/2c4b40e8-db78-49ed-99c1-46945964bba6-kube-api-access-d5zxk\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.076781 master-0 kubenswrapper[7864]: I0224 02:04:02.076744 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.076982 master-0 kubenswrapper[7864]: I0224 02:04:02.076838 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.076982 master-0 kubenswrapper[7864]: I0224 02:04:02.076872 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.076982 master-0 kubenswrapper[7864]: I0224 02:04:02.076936 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.077247 master-0 kubenswrapper[7864]: E0224 02:04:02.077131 7864 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 02:04:02.077247 master-0 kubenswrapper[7864]: E0224 02:04:02.077203 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.577180075 +0000 UTC m=+6.904833737 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : secret "serving-cert" not found Feb 24 02:04:02.077973 master-0 kubenswrapper[7864]: E0224 02:04:02.077937 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:02.078043 master-0 kubenswrapper[7864]: E0224 02:04:02.078006 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. 
No retries permitted until 2026-02-24 02:04:02.577989938 +0000 UTC m=+6.905643590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : configmap "client-ca" not found Feb 24 02:04:02.078087 master-0 kubenswrapper[7864]: E0224 02:04:02.078053 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 24 02:04:02.078129 master-0 kubenswrapper[7864]: E0224 02:04:02.078089 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.57807559 +0000 UTC m=+6.905729252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : configmap "config" not found Feb 24 02:04:02.078175 master-0 kubenswrapper[7864]: E0224 02:04:02.078130 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 24 02:04:02.078175 master-0 kubenswrapper[7864]: E0224 02:04:02.078165 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:02.578154562 +0000 UTC m=+6.905808214 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : configmap "openshift-global-ca" not found Feb 24 02:04:02.109742 master-0 kubenswrapper[7864]: I0224 02:04:02.109692 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/2c4b40e8-db78-49ed-99c1-46945964bba6-kube-api-access-d5zxk\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.503341 master-0 kubenswrapper[7864]: I0224 02:04:02.503258 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:04:02.515760 master-0 kubenswrapper[7864]: I0224 02:04:02.515706 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:04:02.583412 master-0 kubenswrapper[7864]: I0224 02:04:02.583332 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.583653 master-0 kubenswrapper[7864]: E0224 02:04:02.583511 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:02.583758 master-0 kubenswrapper[7864]: E0224 02:04:02.583657 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca 
podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:03.58362757 +0000 UTC m=+7.911281232 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : configmap "client-ca" not found Feb 24 02:04:02.583758 master-0 kubenswrapper[7864]: I0224 02:04:02.583738 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.583906 master-0 kubenswrapper[7864]: I0224 02:04:02.583776 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.583906 master-0 kubenswrapper[7864]: E0224 02:04:02.583894 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 24 02:04:02.584458 master-0 kubenswrapper[7864]: E0224 02:04:02.583902 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 24 02:04:02.584458 master-0 kubenswrapper[7864]: E0224 02:04:02.583936 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. 
No retries permitted until 2026-02-24 02:04:03.583923048 +0000 UTC m=+7.911576710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : configmap "openshift-global-ca" not found Feb 24 02:04:02.584458 master-0 kubenswrapper[7864]: E0224 02:04:02.583971 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:03.583951239 +0000 UTC m=+7.911604891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : configmap "config" not found Feb 24 02:04:02.584458 master-0 kubenswrapper[7864]: I0224 02:04:02.584005 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:02.584458 master-0 kubenswrapper[7864]: E0224 02:04:02.584207 7864 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 02:04:02.584458 master-0 kubenswrapper[7864]: E0224 02:04:02.584252 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. 
No retries permitted until 2026-02-24 02:04:03.584239857 +0000 UTC m=+7.911893519 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : secret "serving-cert" not found Feb 24 02:04:03.176653 master-0 kubenswrapper[7864]: I0224 02:04:03.066473 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:04:03.176653 master-0 kubenswrapper[7864]: I0224 02:04:03.139463 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:04:03.248554 master-0 kubenswrapper[7864]: I0224 02:04:03.247351 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"] Feb 24 02:04:03.248554 master-0 kubenswrapper[7864]: I0224 02:04:03.247907 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.250672 master-0 kubenswrapper[7864]: I0224 02:04:03.250600 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2"] Feb 24 02:04:03.251214 master-0 kubenswrapper[7864]: E0224 02:04:03.251160 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" podUID="2c4b40e8-db78-49ed-99c1-46945964bba6" Feb 24 02:04:03.263400 master-0 kubenswrapper[7864]: I0224 02:04:03.257593 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 02:04:03.263400 master-0 kubenswrapper[7864]: I0224 02:04:03.257909 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 02:04:03.263400 master-0 kubenswrapper[7864]: I0224 02:04:03.258036 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 02:04:03.263400 master-0 kubenswrapper[7864]: I0224 02:04:03.258146 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 02:04:03.263400 master-0 kubenswrapper[7864]: I0224 02:04:03.258275 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 02:04:03.263400 master-0 kubenswrapper[7864]: I0224 02:04:03.263344 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"] Feb 24 02:04:03.299595 master-0 kubenswrapper[7864]: I0224 02:04:03.299467 7864 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhgqz\" (UniqueName: \"kubernetes.io/projected/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-kube-api-access-zhgqz\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.299595 master-0 kubenswrapper[7864]: I0224 02:04:03.299519 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.299836 master-0 kubenswrapper[7864]: I0224 02:04:03.299715 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.299836 master-0 kubenswrapper[7864]: I0224 02:04:03.299755 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-config\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.401041 master-0 kubenswrapper[7864]: I0224 02:04:03.400971 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.401272 master-0 kubenswrapper[7864]: E0224 02:04:03.401210 7864 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 02:04:03.402064 master-0 kubenswrapper[7864]: E0224 02:04:03.401319 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:03.901293075 +0000 UTC m=+8.228946697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : secret "serving-cert" not found Feb 24 02:04:03.402064 master-0 kubenswrapper[7864]: I0224 02:04:03.401472 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-config\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.402064 master-0 kubenswrapper[7864]: I0224 02:04:03.401521 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhgqz\" (UniqueName: \"kubernetes.io/projected/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-kube-api-access-zhgqz\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " 
pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.402064 master-0 kubenswrapper[7864]: I0224 02:04:03.401547 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.402064 master-0 kubenswrapper[7864]: E0224 02:04:03.401626 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:03.402064 master-0 kubenswrapper[7864]: E0224 02:04:03.401654 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:03.901647415 +0000 UTC m=+8.229301037 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : configmap "client-ca" not found Feb 24 02:04:03.402502 master-0 kubenswrapper[7864]: I0224 02:04:03.402469 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-config\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.424239 master-0 kubenswrapper[7864]: I0224 02:04:03.424205 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhgqz\" (UniqueName: \"kubernetes.io/projected/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-kube-api-access-zhgqz\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:03.598540 master-0 kubenswrapper[7864]: I0224 02:04:03.597255 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-nqcs2"] Feb 24 02:04:03.598540 master-0 kubenswrapper[7864]: I0224 02:04:03.597757 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:04:03.600432 master-0 kubenswrapper[7864]: I0224 02:04:03.600153 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 02:04:03.600432 master-0 kubenswrapper[7864]: I0224 02:04:03.600349 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 02:04:03.600859 master-0 kubenswrapper[7864]: I0224 02:04:03.600772 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 02:04:03.602243 master-0 kubenswrapper[7864]: I0224 02:04:03.602103 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 02:04:03.603838 master-0 kubenswrapper[7864]: I0224 02:04:03.603774 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:03.603940 master-0 kubenswrapper[7864]: I0224 02:04:03.603915 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:03.603979 master-0 kubenswrapper[7864]: I0224 02:04:03.603959 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: 
\"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:03.604067 master-0 kubenswrapper[7864]: I0224 02:04:03.604035 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:03.604618 master-0 kubenswrapper[7864]: E0224 02:04:03.604507 7864 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 02:04:03.604677 master-0 kubenswrapper[7864]: E0224 02:04:03.604624 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:05.604596567 +0000 UTC m=+9.932250219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : secret "serving-cert" not found Feb 24 02:04:03.604729 master-0 kubenswrapper[7864]: E0224 02:04:03.604657 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:03.604795 master-0 kubenswrapper[7864]: E0224 02:04:03.604767 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca podName:2c4b40e8-db78-49ed-99c1-46945964bba6 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:05.604738741 +0000 UTC m=+9.932392373 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca") pod "controller-manager-6c9b8f4d95-whcm2" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6") : configmap "client-ca" not found Feb 24 02:04:03.606300 master-0 kubenswrapper[7864]: I0224 02:04:03.606257 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:03.606366 master-0 kubenswrapper[7864]: I0224 02:04:03.606261 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-whcm2\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2" Feb 24 02:04:03.609162 master-0 kubenswrapper[7864]: I0224 02:04:03.609126 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-nqcs2"] Feb 24 02:04:03.705493 master-0 kubenswrapper[7864]: I0224 02:04:03.705362 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c6153510-452b-4726-8b63-8cc894daa168-signing-key\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:04:03.705744 master-0 kubenswrapper[7864]: I0224 02:04:03.705653 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/c6153510-452b-4726-8b63-8cc894daa168-signing-cabundle\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:04:03.705887 master-0 kubenswrapper[7864]: I0224 02:04:03.705841 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxz8j\" (UniqueName: \"kubernetes.io/projected/c6153510-452b-4726-8b63-8cc894daa168-kube-api-access-lxz8j\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:04:03.808611 master-0 kubenswrapper[7864]: I0224 02:04:03.808488 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c6153510-452b-4726-8b63-8cc894daa168-signing-cabundle\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:04:03.809676 master-0 kubenswrapper[7864]: I0224 02:04:03.808718 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxz8j\" (UniqueName: \"kubernetes.io/projected/c6153510-452b-4726-8b63-8cc894daa168-kube-api-access-lxz8j\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:04:03.809676 master-0 kubenswrapper[7864]: I0224 02:04:03.809101 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c6153510-452b-4726-8b63-8cc894daa168-signing-key\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:04:03.810224 master-0 kubenswrapper[7864]: I0224 02:04:03.810159 7864 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c6153510-452b-4726-8b63-8cc894daa168-signing-cabundle\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2"
Feb 24 02:04:03.819752 master-0 kubenswrapper[7864]: I0224 02:04:03.819683 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c6153510-452b-4726-8b63-8cc894daa168-signing-key\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2"
Feb 24 02:04:03.841337 master-0 kubenswrapper[7864]: I0224 02:04:03.841281 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxz8j\" (UniqueName: \"kubernetes.io/projected/c6153510-452b-4726-8b63-8cc894daa168-kube-api-access-lxz8j\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2"
Feb 24 02:04:03.911056 master-0 kubenswrapper[7864]: I0224 02:04:03.910971 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"
Feb 24 02:04:03.911172 master-0 kubenswrapper[7864]: I0224 02:04:03.911117 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"
Feb 24 02:04:03.911538 master-0 kubenswrapper[7864]: E0224 02:04:03.911502 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 24 02:04:03.911688 master-0 kubenswrapper[7864]: E0224 02:04:03.911660 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.911612428 +0000 UTC m=+9.239266060 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : configmap "client-ca" not found
Feb 24 02:04:03.912399 master-0 kubenswrapper[7864]: E0224 02:04:03.912334 7864 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 24 02:04:03.912602 master-0 kubenswrapper[7864]: E0224 02:04:03.912492 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:04.912454012 +0000 UTC m=+9.240107834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : secret "serving-cert" not found
Feb 24 02:04:03.926067 master-0 kubenswrapper[7864]: I0224 02:04:03.925943 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2"
Feb 24 02:04:04.082242 master-0 kubenswrapper[7864]: I0224 02:04:04.081857 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerStarted","Data":"07b9433470f2cae90108f994623de6a108abe146e8addc319cfc6c6ef422b361"}
Feb 24 02:04:04.082913 master-0 kubenswrapper[7864]: I0224 02:04:04.081984 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2"
Feb 24 02:04:04.083160 master-0 kubenswrapper[7864]: I0224 02:04:04.083129 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:04:04.162174 master-0 kubenswrapper[7864]: I0224 02:04:04.161955 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2"
Feb 24 02:04:04.172154 master-0 kubenswrapper[7864]: I0224 02:04:04.172103 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-nqcs2"]
Feb 24 02:04:04.180767 master-0 kubenswrapper[7864]: W0224 02:04:04.180706 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6153510_452b_4726_8b63_8cc894daa168.slice/crio-586631f1005e0eec9e04637dd3347ca45f3a799902b12b7ee4c09257fef0aee9 WatchSource:0}: Error finding container 586631f1005e0eec9e04637dd3347ca45f3a799902b12b7ee4c09257fef0aee9: Status 404 returned error can't find the container with id 586631f1005e0eec9e04637dd3347ca45f3a799902b12b7ee4c09257fef0aee9
Feb 24 02:04:04.216267 master-0 kubenswrapper[7864]: I0224 02:04:04.216149 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles\") pod \"2c4b40e8-db78-49ed-99c1-46945964bba6\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") "
Feb 24 02:04:04.216382 master-0 kubenswrapper[7864]: I0224 02:04:04.216294 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/2c4b40e8-db78-49ed-99c1-46945964bba6-kube-api-access-d5zxk\") pod \"2c4b40e8-db78-49ed-99c1-46945964bba6\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") "
Feb 24 02:04:04.216607 master-0 kubenswrapper[7864]: I0224 02:04:04.216511 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config\") pod \"2c4b40e8-db78-49ed-99c1-46945964bba6\" (UID: \"2c4b40e8-db78-49ed-99c1-46945964bba6\") "
Feb 24 02:04:04.216946 master-0 kubenswrapper[7864]: I0224 02:04:04.216901 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2c4b40e8-db78-49ed-99c1-46945964bba6" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:04:04.223344 master-0 kubenswrapper[7864]: I0224 02:04:04.223222 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4b40e8-db78-49ed-99c1-46945964bba6-kube-api-access-d5zxk" (OuterVolumeSpecName: "kube-api-access-d5zxk") pod "2c4b40e8-db78-49ed-99c1-46945964bba6" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6"). InnerVolumeSpecName "kube-api-access-d5zxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:04:04.226326 master-0 kubenswrapper[7864]: I0224 02:04:04.225820 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config" (OuterVolumeSpecName: "config") pod "2c4b40e8-db78-49ed-99c1-46945964bba6" (UID: "2c4b40e8-db78-49ed-99c1-46945964bba6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:04:04.226326 master-0 kubenswrapper[7864]: I0224 02:04:04.225949 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5zxk\" (UniqueName: \"kubernetes.io/projected/2c4b40e8-db78-49ed-99c1-46945964bba6-kube-api-access-d5zxk\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:04.226326 master-0 kubenswrapper[7864]: I0224 02:04:04.225979 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:04.226326 master-0 kubenswrapper[7864]: I0224 02:04:04.226002 7864 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:04.735420 master-0 kubenswrapper[7864]: I0224 02:04:04.735348 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"
Feb 24 02:04:04.735420 master-0 kubenswrapper[7864]: I0224 02:04:04.735411 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:04:04.735420 master-0 kubenswrapper[7864]: I0224 02:04:04.735436 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.735614 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.735725 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.735692313 +0000 UTC m=+17.063345935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.735802 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.735841 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.735907 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.735927 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.735920379 +0000 UTC m=+17.063574001 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "node-tuning-operator-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.735976 7864 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736039 7864 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736049 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert podName:6a9ccd8e-d964-4c03-8ffc-51b464030c25 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736031482 +0000 UTC m=+17.063685104 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-8x6sd" (UID: "6a9ccd8e-d964-4c03-8ffc-51b464030c25") : secret "performance-addon-operator-webhook-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736064 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls podName:c84dc269-43ae-4083-9998-a0b3c90bb681 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736058473 +0000 UTC m=+17.063712085 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-d7sx4" (UID: "c84dc269-43ae-4083-9998-a0b3c90bb681") : secret "image-registry-operator-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736073 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736148 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736194 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736157686 +0000 UTC m=+17.063811318 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736196 7864 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736232 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736224758 +0000 UTC m=+17.063878380 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736235 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736302 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736333 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736348 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736360 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736369 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736363432 +0000 UTC m=+17.064017054 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-webhook-server-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736401 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736428 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736405 7864 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736473 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736468085 +0000 UTC m=+17.064121707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736436 7864 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736508 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls podName:2cb764f6-40f8-4e87-8be0-b9d7b0364201 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736498566 +0000 UTC m=+17.064152208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls") pod "dns-operator-8c7d49845-hxcn2" (UID: "2cb764f6-40f8-4e87-8be0-b9d7b0364201") : secret "metrics-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736521 7864 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736541 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736534287 +0000 UTC m=+17.064187909 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : secret "metrics-daemon-secret" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736529 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736616 7864 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736672 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736684 7864 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736695 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736689831 +0000 UTC m=+17.064343453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736728 7864 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736739 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736703241 +0000 UTC m=+17.064356953 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736780 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert podName:4f72a322-2142-482a-9b0b-2ad890181d7a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736761933 +0000 UTC m=+17.064415835 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert") pod "cluster-version-operator-5cfd9759cf-v5tpt" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a") : secret "cluster-version-operator-serving-cert" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.736815 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls podName:c3278a82-ee70-4d6c-9c96-f8cb1bcb9334 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.736799174 +0000 UTC m=+17.064452936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls") pod "ingress-operator-6569778c84-6dlqb" (UID: "c3278a82-ee70-4d6c-9c96-f8cb1bcb9334") : secret "metrics-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736626 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.736974 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: I0224 02:04:04.737029 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.737212 7864 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.737262 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.737247777 +0000 UTC m=+17.064901439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.737333 7864 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Feb 24 02:04:04.738714 master-0 kubenswrapper[7864]: E0224 02:04:04.737368 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls podName:7b4e3ba0-5194-4e20-8f12-dea4b67504fe nodeName:}" failed. No retries permitted until 2026-02-24 02:04:12.73735669 +0000 UTC m=+17.065010342 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k98fq" (UID: "7b4e3ba0-5194-4e20-8f12-dea4b67504fe") : secret "cluster-baremetal-operator-tls" not found
Feb 24 02:04:04.940004 master-0 kubenswrapper[7864]: I0224 02:04:04.939909 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"
Feb 24 02:04:04.941236 master-0 kubenswrapper[7864]: I0224 02:04:04.940077 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"
Feb 24 02:04:04.941236 master-0 kubenswrapper[7864]: E0224 02:04:04.940095 7864 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 24 02:04:04.941236 master-0 kubenswrapper[7864]: E0224 02:04:04.940184 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:06.940163108 +0000 UTC m=+11.267816730 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : secret "serving-cert" not found
Feb 24 02:04:04.941236 master-0 kubenswrapper[7864]: E0224 02:04:04.940309 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 24 02:04:04.941236 master-0 kubenswrapper[7864]: E0224 02:04:04.940400 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:06.940372924 +0000 UTC m=+11.268026586 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : configmap "client-ca" not found
Feb 24 02:04:05.087268 master-0 kubenswrapper[7864]: I0224 02:04:05.087108 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" event={"ID":"c6153510-452b-4726-8b63-8cc894daa168","Type":"ContainerStarted","Data":"cc397ba4850c1884796fd99f77165bb7223cb379b9ddec9b0da649d7c4abf92f"}
Feb 24 02:04:05.087268 master-0 kubenswrapper[7864]: I0224 02:04:05.087167 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" event={"ID":"c6153510-452b-4726-8b63-8cc894daa168","Type":"ContainerStarted","Data":"586631f1005e0eec9e04637dd3347ca45f3a799902b12b7ee4c09257fef0aee9"}
Feb 24 02:04:05.089328 master-0 kubenswrapper[7864]: I0224 02:04:05.089280 7864 generic.go:334] "Generic (PLEG): container finished" podID="303d5058-84df-40d1-a941-896b093ae470" containerID="3765f830d9a5fe9077b8e56d63e0f2189d75d32a461453b1f0db5a0b05c13f47" exitCode=0
Feb 24 02:04:05.089442 master-0 kubenswrapper[7864]: I0224 02:04:05.089354 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2"
Feb 24 02:04:05.090102 master-0 kubenswrapper[7864]: I0224 02:04:05.089982 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerDied","Data":"3765f830d9a5fe9077b8e56d63e0f2189d75d32a461453b1f0db5a0b05c13f47"}
Feb 24 02:04:05.103729 master-0 kubenswrapper[7864]: I0224 02:04:05.103627 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" podStartSLOduration=2.103592801 podStartE2EDuration="2.103592801s" podCreationTimestamp="2026-02-24 02:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:05.102823769 +0000 UTC m=+9.430477391" watchObservedRunningTime="2026-02-24 02:04:05.103592801 +0000 UTC m=+9.431246463"
Feb 24 02:04:05.158875 master-0 kubenswrapper[7864]: I0224 02:04:05.158234 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:04:05.158875 master-0 kubenswrapper[7864]: I0224 02:04:05.158738 7864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 02:04:05.158875 master-0 kubenswrapper[7864]: I0224 02:04:05.158747 7864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 02:04:05.170384 master-0 kubenswrapper[7864]: I0224 02:04:05.166225 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2"]
Feb 24 02:04:05.174086 master-0 kubenswrapper[7864]: I0224 02:04:05.173823 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-whcm2"]
Feb 24 02:04:05.193226 master-0 kubenswrapper[7864]: I0224 02:04:05.192380 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:04:05.252594 master-0 kubenswrapper[7864]: I0224 02:04:05.250738 7864 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c4b40e8-db78-49ed-99c1-46945964bba6-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:05.252594 master-0 kubenswrapper[7864]: I0224 02:04:05.250778 7864 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c4b40e8-db78-49ed-99c1-46945964bba6-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:05.730555 master-0 kubenswrapper[7864]: I0224 02:04:05.730481 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b75dfd574-s72zx"]
Feb 24 02:04:05.731298 master-0 kubenswrapper[7864]: I0224 02:04:05.731254 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx"
Feb 24 02:04:05.737221 master-0 kubenswrapper[7864]: I0224 02:04:05.736202 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 02:04:05.737221 master-0 kubenswrapper[7864]: I0224 02:04:05.736458 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 02:04:05.737221 master-0 kubenswrapper[7864]: I0224 02:04:05.737096 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 02:04:05.739122 master-0 kubenswrapper[7864]: I0224 02:04:05.738414 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 02:04:05.739122 master-0 kubenswrapper[7864]: I0224 02:04:05.738925 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 02:04:05.746293 master-0 kubenswrapper[7864]: I0224 02:04:05.746210 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 02:04:05.766602 master-0 kubenswrapper[7864]: I0224 02:04:05.748635 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b75dfd574-s72zx"]
Feb 24 02:04:05.795887 master-0 kubenswrapper[7864]: I0224 02:04:05.795836 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-proxy-ca-bundles\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx"
Feb 24 02:04:05.795887 master-0 kubenswrapper[7864]: I0224 02:04:05.795894 7864
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.796175 master-0 kubenswrapper[7864]: I0224 02:04:05.795926 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-config\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.796175 master-0 kubenswrapper[7864]: I0224 02:04:05.795995 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-kube-api-access-nrn8d\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.796175 master-0 kubenswrapper[7864]: I0224 02:04:05.796107 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.894807 master-0 kubenswrapper[7864]: I0224 02:04:05.894755 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4b40e8-db78-49ed-99c1-46945964bba6" path="/var/lib/kubelet/pods/2c4b40e8-db78-49ed-99c1-46945964bba6/volumes" Feb 24 02:04:05.897269 
master-0 kubenswrapper[7864]: I0224 02:04:05.897219 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-kube-api-access-nrn8d\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.897368 master-0 kubenswrapper[7864]: I0224 02:04:05.897350 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.897868 master-0 kubenswrapper[7864]: I0224 02:04:05.897441 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-proxy-ca-bundles\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.897868 master-0 kubenswrapper[7864]: I0224 02:04:05.897477 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.897868 master-0 kubenswrapper[7864]: I0224 02:04:05.897495 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-config\") pod 
\"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.898724 master-0 kubenswrapper[7864]: I0224 02:04:05.898702 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-config\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.899160 master-0 kubenswrapper[7864]: E0224 02:04:05.899062 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:05.899160 master-0 kubenswrapper[7864]: E0224 02:04:05.899112 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca podName:a3096e84-fed3-48ad-ab9f-6d51e941cb2a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:06.399098724 +0000 UTC m=+10.726752336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca") pod "controller-manager-5b75dfd574-s72zx" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a") : configmap "client-ca" not found Feb 24 02:04:05.899881 master-0 kubenswrapper[7864]: E0224 02:04:05.899754 7864 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 02:04:05.899881 master-0 kubenswrapper[7864]: E0224 02:04:05.899837 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert podName:a3096e84-fed3-48ad-ab9f-6d51e941cb2a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:06.399816584 +0000 UTC m=+10.727470206 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert") pod "controller-manager-5b75dfd574-s72zx" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a") : secret "serving-cert" not found Feb 24 02:04:05.900399 master-0 kubenswrapper[7864]: I0224 02:04:05.900108 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-proxy-ca-bundles\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:05.940252 master-0 kubenswrapper[7864]: I0224 02:04:05.939862 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-kube-api-access-nrn8d\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:06.095203 master-0 kubenswrapper[7864]: I0224 02:04:06.095143 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerStarted","Data":"1dd68f4f64e0c62e01d0497cf59111173fe627d06971140a305a4032c20cc485"} Feb 24 02:04:06.099746 master-0 kubenswrapper[7864]: I0224 02:04:06.099373 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" event={"ID":"f807f33c-8132-48a8-ab12-4b54c1cd2b10","Type":"ContainerStarted","Data":"b349be3a51fa8e9742ffa9ffec1fca593b97c110bfc1659b3565ff20080159a9"} Feb 24 02:04:06.099746 master-0 kubenswrapper[7864]: I0224 02:04:06.099481 7864 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Feb 24 02:04:06.118397 master-0 kubenswrapper[7864]: I0224 02:04:06.117563 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podStartSLOduration=1.727346823 podStartE2EDuration="6.117529401s" podCreationTimestamp="2026-02-24 02:04:00 +0000 UTC" firstStartedPulling="2026-02-24 02:04:01.491519698 +0000 UTC m=+5.819173320" lastFinishedPulling="2026-02-24 02:04:05.881702276 +0000 UTC m=+10.209355898" observedRunningTime="2026-02-24 02:04:06.116534253 +0000 UTC m=+10.444187895" watchObservedRunningTime="2026-02-24 02:04:06.117529401 +0000 UTC m=+10.445183053" Feb 24 02:04:06.408652 master-0 kubenswrapper[7864]: I0224 02:04:06.408492 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:06.408986 master-0 kubenswrapper[7864]: E0224 02:04:06.408896 7864 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 02:04:06.409053 master-0 kubenswrapper[7864]: I0224 02:04:06.409022 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:06.409110 master-0 kubenswrapper[7864]: E0224 02:04:06.409062 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert podName:a3096e84-fed3-48ad-ab9f-6d51e941cb2a nodeName:}" failed. 
No retries permitted until 2026-02-24 02:04:07.409022887 +0000 UTC m=+11.736676539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert") pod "controller-manager-5b75dfd574-s72zx" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a") : secret "serving-cert" not found Feb 24 02:04:06.409271 master-0 kubenswrapper[7864]: E0224 02:04:06.409204 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:06.409398 master-0 kubenswrapper[7864]: E0224 02:04:06.409364 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca podName:a3096e84-fed3-48ad-ab9f-6d51e941cb2a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:07.409323575 +0000 UTC m=+11.736977227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca") pod "controller-manager-5b75dfd574-s72zx" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a") : configmap "client-ca" not found Feb 24 02:04:07.014328 master-0 kubenswrapper[7864]: I0224 02:04:07.014269 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:07.014328 master-0 kubenswrapper[7864]: I0224 02:04:07.014357 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: 
\"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:07.017663 master-0 kubenswrapper[7864]: E0224 02:04:07.014520 7864 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 02:04:07.017663 master-0 kubenswrapper[7864]: E0224 02:04:07.014688 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:07.017663 master-0 kubenswrapper[7864]: E0224 02:04:07.014775 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:11.014748677 +0000 UTC m=+15.342402309 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : configmap "client-ca" not found Feb 24 02:04:07.017663 master-0 kubenswrapper[7864]: E0224 02:04:07.014909 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:11.01487374 +0000 UTC m=+15.342527612 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : secret "serving-cert" not found Feb 24 02:04:07.108045 master-0 kubenswrapper[7864]: I0224 02:04:07.107962 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" event={"ID":"f807f33c-8132-48a8-ab12-4b54c1cd2b10","Type":"ContainerStarted","Data":"edd50eb47e10b09f1c0c971bc402155dbd7033b3e0dee0c8f6cc4bb8c1175ca2"} Feb 24 02:04:07.129052 master-0 kubenswrapper[7864]: I0224 02:04:07.128934 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" podStartSLOduration=2.7643856700000002 podStartE2EDuration="7.128906169s" podCreationTimestamp="2026-02-24 02:04:00 +0000 UTC" firstStartedPulling="2026-02-24 02:04:01.491036584 +0000 UTC m=+5.818690196" lastFinishedPulling="2026-02-24 02:04:05.855557073 +0000 UTC m=+10.183210695" observedRunningTime="2026-02-24 02:04:07.126916643 +0000 UTC m=+11.454570375" watchObservedRunningTime="2026-02-24 02:04:07.128906169 +0000 UTC m=+11.456559801" Feb 24 02:04:07.425046 master-0 kubenswrapper[7864]: I0224 02:04:07.420182 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:07.425046 master-0 kubenswrapper[7864]: I0224 02:04:07.420320 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert\") pod 
\"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:07.425046 master-0 kubenswrapper[7864]: E0224 02:04:07.420570 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:07.425046 master-0 kubenswrapper[7864]: E0224 02:04:07.420733 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca podName:a3096e84-fed3-48ad-ab9f-6d51e941cb2a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:09.420698403 +0000 UTC m=+13.748352065 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca") pod "controller-manager-5b75dfd574-s72zx" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a") : configmap "client-ca" not found Feb 24 02:04:07.425046 master-0 kubenswrapper[7864]: E0224 02:04:07.421878 7864 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 02:04:07.425046 master-0 kubenswrapper[7864]: E0224 02:04:07.421998 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert podName:a3096e84-fed3-48ad-ab9f-6d51e941cb2a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:09.421968389 +0000 UTC m=+13.749622051 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert") pod "controller-manager-5b75dfd574-s72zx" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a") : secret "serving-cert" not found Feb 24 02:04:07.746080 master-0 kubenswrapper[7864]: I0224 02:04:07.746007 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:04:09.123607 master-0 kubenswrapper[7864]: I0224 02:04:09.119839 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerStarted","Data":"8930b4416260ff6550582e0ac717e48f996f0ee753ab29009f1d6eec95a046f6"} Feb 24 02:04:09.460432 master-0 kubenswrapper[7864]: I0224 02:04:09.460363 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:09.460820 master-0 kubenswrapper[7864]: I0224 02:04:09.460762 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:09.461026 master-0 kubenswrapper[7864]: E0224 02:04:09.460961 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:09.461088 master-0 kubenswrapper[7864]: E0224 02:04:09.461063 7864 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca podName:a3096e84-fed3-48ad-ab9f-6d51e941cb2a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:13.461037441 +0000 UTC m=+17.788691093 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca") pod "controller-manager-5b75dfd574-s72zx" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a") : configmap "client-ca" not found Feb 24 02:04:09.468910 master-0 kubenswrapper[7864]: I0224 02:04:09.468842 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:10.126877 master-0 kubenswrapper[7864]: I0224 02:04:10.126775 7864 generic.go:334] "Generic (PLEG): container finished" podID="c92835f0-7f32-4584-8304-843d7979392a" containerID="07b9433470f2cae90108f994623de6a108abe146e8addc319cfc6c6ef422b361" exitCode=0 Feb 24 02:04:10.126877 master-0 kubenswrapper[7864]: I0224 02:04:10.126846 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerDied","Data":"07b9433470f2cae90108f994623de6a108abe146e8addc319cfc6c6ef422b361"} Feb 24 02:04:10.128048 master-0 kubenswrapper[7864]: I0224 02:04:10.127428 7864 scope.go:117] "RemoveContainer" containerID="07b9433470f2cae90108f994623de6a108abe146e8addc319cfc6c6ef422b361" Feb 24 02:04:10.738874 master-0 kubenswrapper[7864]: I0224 02:04:10.738778 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:04:11.094479 master-0 kubenswrapper[7864]: I0224 02:04:11.094351 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:11.094479 master-0 kubenswrapper[7864]: I0224 02:04:11.094443 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:11.094743 master-0 kubenswrapper[7864]: E0224 02:04:11.094587 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:11.094743 master-0 kubenswrapper[7864]: E0224 02:04:11.094668 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:19.094645575 +0000 UTC m=+23.422299207 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : configmap "client-ca" not found Feb 24 02:04:11.094743 master-0 kubenswrapper[7864]: E0224 02:04:11.094660 7864 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 02:04:11.094875 master-0 kubenswrapper[7864]: E0224 02:04:11.094811 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:19.09477188 +0000 UTC m=+23.422425532 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : secret "serving-cert" not found Feb 24 02:04:11.142048 master-0 kubenswrapper[7864]: I0224 02:04:11.141987 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerStarted","Data":"31574f7edd04f096f094395019ad492b65bb9b76a514603c20ad3eb658100f5f"} Feb 24 02:04:11.142947 master-0 kubenswrapper[7864]: I0224 02:04:11.142309 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:04:12.453032 master-0 kubenswrapper[7864]: I0224 02:04:12.452642 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7f665d79f-x624m"] Feb 24 02:04:12.454948 master-0 kubenswrapper[7864]: I0224 02:04:12.454234 7864 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.458926 master-0 kubenswrapper[7864]: I0224 02:04:12.458866 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Feb 24 02:04:12.459884 master-0 kubenswrapper[7864]: I0224 02:04:12.459831 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 02:04:12.463372 master-0 kubenswrapper[7864]: I0224 02:04:12.463308 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 02:04:12.464445 master-0 kubenswrapper[7864]: I0224 02:04:12.464360 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 02:04:12.468685 master-0 kubenswrapper[7864]: I0224 02:04:12.465665 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 02:04:12.468685 master-0 kubenswrapper[7864]: I0224 02:04:12.465960 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 02:04:12.468685 master-0 kubenswrapper[7864]: I0224 02:04:12.465996 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 02:04:12.468685 master-0 kubenswrapper[7864]: I0224 02:04:12.466174 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Feb 24 02:04:12.468685 master-0 kubenswrapper[7864]: I0224 02:04:12.466294 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 02:04:12.477182 master-0 kubenswrapper[7864]: I0224 02:04:12.477121 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 02:04:12.491982 master-0 kubenswrapper[7864]: I0224 02:04:12.491909 7864 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7f665d79f-x624m"] Feb 24 02:04:12.520762 master-0 kubenswrapper[7864]: I0224 02:04:12.520522 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.520762 master-0 kubenswrapper[7864]: I0224 02:04:12.520620 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-encryption-config\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.521184 master-0 kubenswrapper[7864]: I0224 02:04:12.520856 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-image-import-ca\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.521184 master-0 kubenswrapper[7864]: I0224 02:04:12.520933 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9nk\" (UniqueName: \"kubernetes.io/projected/90b909c3-266d-4252-848d-30d45d2effd1-kube-api-access-nw9nk\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.521184 master-0 kubenswrapper[7864]: I0224 02:04:12.521011 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-config\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.521184 master-0 kubenswrapper[7864]: I0224 02:04:12.521068 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-etcd-serving-ca\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.521184 master-0 kubenswrapper[7864]: I0224 02:04:12.521129 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-trusted-ca-bundle\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.521478 master-0 kubenswrapper[7864]: I0224 02:04:12.521284 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.521478 master-0 kubenswrapper[7864]: I0224 02:04:12.521316 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-audit-dir\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.521478 master-0 kubenswrapper[7864]: I0224 02:04:12.521349 7864 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-node-pullsecrets\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.521478 master-0 kubenswrapper[7864]: I0224 02:04:12.521380 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-etcd-client\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.623334 master-0 kubenswrapper[7864]: I0224 02:04:12.623234 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-config\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.623334 master-0 kubenswrapper[7864]: I0224 02:04:12.623333 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-etcd-serving-ca\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.623783 master-0 kubenswrapper[7864]: I0224 02:04:12.623405 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-trusted-ca-bundle\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.623783 
master-0 kubenswrapper[7864]: I0224 02:04:12.623671 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-etcd-client\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.623783 master-0 kubenswrapper[7864]: I0224 02:04:12.623708 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.624458 master-0 kubenswrapper[7864]: I0224 02:04:12.624137 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-audit-dir\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.624458 master-0 kubenswrapper[7864]: I0224 02:04:12.624222 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-node-pullsecrets\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.624458 master-0 kubenswrapper[7864]: I0224 02:04:12.624296 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-audit-dir\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.624458 master-0 
kubenswrapper[7864]: I0224 02:04:12.624375 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.624458 master-0 kubenswrapper[7864]: I0224 02:04:12.624406 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-node-pullsecrets\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.624458 master-0 kubenswrapper[7864]: I0224 02:04:12.624416 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-encryption-config\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.624914 master-0 kubenswrapper[7864]: E0224 02:04:12.624468 7864 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 24 02:04:12.624914 master-0 kubenswrapper[7864]: E0224 02:04:12.624606 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert podName:90b909c3-266d-4252-848d-30d45d2effd1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:13.124552789 +0000 UTC m=+17.452206451 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert") pod "apiserver-7f665d79f-x624m" (UID: "90b909c3-266d-4252-848d-30d45d2effd1") : secret "serving-cert" not found Feb 24 02:04:12.624914 master-0 kubenswrapper[7864]: I0224 02:04:12.624644 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-config\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.624914 master-0 kubenswrapper[7864]: E0224 02:04:12.624863 7864 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 24 02:04:12.625168 master-0 kubenswrapper[7864]: E0224 02:04:12.625044 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit podName:90b909c3-266d-4252-848d-30d45d2effd1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:13.125013665 +0000 UTC m=+17.452667327 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit") pod "apiserver-7f665d79f-x624m" (UID: "90b909c3-266d-4252-848d-30d45d2effd1") : configmap "audit-0" not found Feb 24 02:04:12.625168 master-0 kubenswrapper[7864]: I0224 02:04:12.625086 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-image-import-ca\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.625317 master-0 kubenswrapper[7864]: I0224 02:04:12.625182 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9nk\" (UniqueName: \"kubernetes.io/projected/90b909c3-266d-4252-848d-30d45d2effd1-kube-api-access-nw9nk\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.626430 master-0 kubenswrapper[7864]: I0224 02:04:12.626256 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-trusted-ca-bundle\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.626430 master-0 kubenswrapper[7864]: I0224 02:04:12.626324 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-etcd-serving-ca\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.626430 master-0 kubenswrapper[7864]: I0224 02:04:12.626354 7864 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-image-import-ca\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.633391 master-0 kubenswrapper[7864]: I0224 02:04:12.633319 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-encryption-config\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.633703 master-0 kubenswrapper[7864]: I0224 02:04:12.633645 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-etcd-client\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.661081 master-0 kubenswrapper[7864]: I0224 02:04:12.661002 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9nk\" (UniqueName: \"kubernetes.io/projected/90b909c3-266d-4252-848d-30d45d2effd1-kube-api-access-nw9nk\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:12.828027 master-0 kubenswrapper[7864]: I0224 02:04:12.827910 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:04:12.828027 master-0 
kubenswrapper[7864]: I0224 02:04:12.827983 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:04:12.828027 master-0 kubenswrapper[7864]: I0224 02:04:12.828019 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:04:12.828445 master-0 kubenswrapper[7864]: I0224 02:04:12.828366 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:04:12.828786 master-0 kubenswrapper[7864]: E0224 02:04:12.828722 7864 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 02:04:12.828831 master-0 kubenswrapper[7864]: I0224 02:04:12.828760 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:04:12.828923 master-0 kubenswrapper[7864]: E0224 02:04:12.828894 7864 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls podName:f2e9cdff-8c15-43df-b8df-7fe3a73fda86 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:28.828843402 +0000 UTC m=+33.156497054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-fkzdb" (UID: "f2e9cdff-8c15-43df-b8df-7fe3a73fda86") : secret "cluster-monitoring-operator-tls" not found Feb 24 02:04:12.828983 master-0 kubenswrapper[7864]: E0224 02:04:12.828948 7864 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 02:04:12.829014 master-0 kubenswrapper[7864]: I0224 02:04:12.828978 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:04:12.829116 master-0 kubenswrapper[7864]: E0224 02:04:12.829078 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs podName:dc3d08db-45fa-4fef-b1fd-2875f22d5c45 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:28.829037409 +0000 UTC m=+33.156691071 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-dg77f" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45") : secret "multus-admission-controller-secret" not found Feb 24 02:04:12.829270 master-0 kubenswrapper[7864]: I0224 02:04:12.829233 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:04:12.829312 master-0 kubenswrapper[7864]: E0224 02:04:12.829265 7864 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Feb 24 02:04:12.829410 master-0 kubenswrapper[7864]: E0224 02:04:12.829385 7864 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 24 02:04:12.829446 master-0 kubenswrapper[7864]: I0224 02:04:12.829300 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:04:12.829446 master-0 kubenswrapper[7864]: E0224 02:04:12.829434 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls podName:db8d6627-394c-4087-bfa4-bf7580f6bb4b nodeName:}" failed. No retries permitted until 2026-02-24 02:04:28.829390272 +0000 UTC m=+33.157044134 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls") pod "machine-config-operator-7f8c75f984-ffnq7" (UID: "db8d6627-394c-4087-bfa4-bf7580f6bb4b") : secret "mco-proxy-tls" not found Feb 24 02:04:12.829546 master-0 kubenswrapper[7864]: E0224 02:04:12.829521 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs podName:70e2ba24-4871-4d1d-9935-156fdbeb2810 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:28.829504546 +0000 UTC m=+33.157158208 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs") pod "network-metrics-daemon-tntcf" (UID: "70e2ba24-4871-4d1d-9935-156fdbeb2810") : secret "metrics-daemon-secret" not found Feb 24 02:04:12.829622 master-0 kubenswrapper[7864]: I0224 02:04:12.829594 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:04:12.829684 master-0 kubenswrapper[7864]: I0224 02:04:12.829658 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:04:12.829872 master-0 kubenswrapper[7864]: E0224 02:04:12.829824 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret 
"catalog-operator-serving-cert" not found Feb 24 02:04:12.829916 master-0 kubenswrapper[7864]: I0224 02:04:12.829866 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:04:12.829987 master-0 kubenswrapper[7864]: E0224 02:04:12.829967 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert podName:12b89e05-a503-47aa-90b2-4d741e015b19 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:28.829926241 +0000 UTC m=+33.157579893 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert") pod "catalog-operator-596f79dd6f-8cg5c" (UID: "12b89e05-a503-47aa-90b2-4d741e015b19") : secret "catalog-operator-serving-cert" not found Feb 24 02:04:12.830173 master-0 kubenswrapper[7864]: I0224 02:04:12.830140 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:04:12.830256 master-0 kubenswrapper[7864]: I0224 02:04:12.830231 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:04:12.830315 master-0 kubenswrapper[7864]: I0224 02:04:12.830292 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:04:12.830875 master-0 kubenswrapper[7864]: I0224 02:04:12.830349 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:04:12.830875 master-0 kubenswrapper[7864]: E0224 02:04:12.830302 7864 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 02:04:12.830875 master-0 kubenswrapper[7864]: E0224 02:04:12.830395 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 24 02:04:12.830875 master-0 kubenswrapper[7864]: I0224 02:04:12.830423 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:04:12.830875 master-0 kubenswrapper[7864]: E0224 
02:04:12.830451 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert podName:02f1d753-983a-4c4a-b1a0-560de173859a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:28.83043597 +0000 UTC m=+33.158089632 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert") pod "olm-operator-5499d7f7bb-5g6nc" (UID: "02f1d753-983a-4c4a-b1a0-560de173859a") : secret "olm-operator-serving-cert" not found Feb 24 02:04:12.830875 master-0 kubenswrapper[7864]: E0224 02:04:12.830527 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics podName:91d16f7b-390a-4d9d-99d6-cc8e210801d1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:28.830511293 +0000 UTC m=+33.158164955 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-4qf9p" (UID: "91d16f7b-390a-4d9d-99d6-cc8e210801d1") : secret "marketplace-operator-metrics" not found Feb 24 02:04:12.830875 master-0 kubenswrapper[7864]: E0224 02:04:12.830562 7864 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 02:04:12.830875 master-0 kubenswrapper[7864]: E0224 02:04:12.830848 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert podName:6320dbb5-b84d-4a57-8c65-fbed8421f84a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:28.830802693 +0000 UTC m=+33.158456345 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2hllb" (UID: "6320dbb5-b84d-4a57-8c65-fbed8421f84a") : secret "package-server-manager-serving-cert" not found Feb 24 02:04:12.833651 master-0 kubenswrapper[7864]: I0224 02:04:12.833558 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:04:12.842649 master-0 kubenswrapper[7864]: I0224 02:04:12.840160 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:04:12.842649 master-0 kubenswrapper[7864]: I0224 02:04:12.840338 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:04:12.842649 master-0 kubenswrapper[7864]: I0224 02:04:12.840393 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod 
\"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:04:12.842649 master-0 kubenswrapper[7864]: I0224 02:04:12.840472 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:04:12.842649 master-0 kubenswrapper[7864]: I0224 02:04:12.840686 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-v5tpt\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:04:12.842649 master-0 kubenswrapper[7864]: I0224 02:04:12.840709 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:04:12.842649 master-0 kubenswrapper[7864]: I0224 02:04:12.840854 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:04:13.038759 master-0 kubenswrapper[7864]: I0224 02:04:13.038442 7864 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:04:13.039225 master-0 kubenswrapper[7864]: I0224 02:04:13.039164 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:04:13.039354 master-0 kubenswrapper[7864]: I0224 02:04:13.039170 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" Feb 24 02:04:13.050817 master-0 kubenswrapper[7864]: I0224 02:04:13.040039 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:04:13.050817 master-0 kubenswrapper[7864]: I0224 02:04:13.040539 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:04:13.050817 master-0 kubenswrapper[7864]: I0224 02:04:13.042030 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:04:13.124233 master-0 kubenswrapper[7864]: W0224 02:04:13.124145 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f72a322_2142_482a_9b0b_2ad890181d7a.slice/crio-7240cd722a282af2e89b1de235fe3e28c9e5681ff6d5937b5f928d0aa4e3ea83 WatchSource:0}: Error finding container 7240cd722a282af2e89b1de235fe3e28c9e5681ff6d5937b5f928d0aa4e3ea83: Status 404 returned error can't find the container with id 7240cd722a282af2e89b1de235fe3e28c9e5681ff6d5937b5f928d0aa4e3ea83 Feb 24 02:04:13.135831 master-0 kubenswrapper[7864]: I0224 02:04:13.135783 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:13.135959 master-0 kubenswrapper[7864]: I0224 02:04:13.135875 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:13.136029 master-0 kubenswrapper[7864]: E0224 02:04:13.135992 7864 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 24 02:04:13.136095 master-0 kubenswrapper[7864]: E0224 02:04:13.136072 7864 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 24 02:04:13.136164 master-0 kubenswrapper[7864]: E0224 02:04:13.136078 7864 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert podName:90b909c3-266d-4252-848d-30d45d2effd1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:14.136049466 +0000 UTC m=+18.463703098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert") pod "apiserver-7f665d79f-x624m" (UID: "90b909c3-266d-4252-848d-30d45d2effd1") : secret "serving-cert" not found Feb 24 02:04:13.136237 master-0 kubenswrapper[7864]: E0224 02:04:13.136182 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit podName:90b909c3-266d-4252-848d-30d45d2effd1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:14.136155639 +0000 UTC m=+18.463809271 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit") pod "apiserver-7f665d79f-x624m" (UID: "90b909c3-266d-4252-848d-30d45d2effd1") : configmap "audit-0" not found Feb 24 02:04:13.152171 master-0 kubenswrapper[7864]: I0224 02:04:13.152112 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" event={"ID":"4f72a322-2142-482a-9b0b-2ad890181d7a","Type":"ContainerStarted","Data":"7240cd722a282af2e89b1de235fe3e28c9e5681ff6d5937b5f928d0aa4e3ea83"} Feb 24 02:04:13.405058 master-0 kubenswrapper[7864]: I0224 02:04:13.404688 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"] Feb 24 02:04:13.412881 master-0 kubenswrapper[7864]: W0224 02:04:13.412820 7864 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b4e3ba0_5194_4e20_8f12_dea4b67504fe.slice/crio-7ed7554e0b6eb88f1840ed2eee6ab3bddc21f230340186b0439d63f9a885eb31 WatchSource:0}: Error finding container 7ed7554e0b6eb88f1840ed2eee6ab3bddc21f230340186b0439d63f9a885eb31: Status 404 returned error can't find the container with id 7ed7554e0b6eb88f1840ed2eee6ab3bddc21f230340186b0439d63f9a885eb31 Feb 24 02:04:13.422407 master-0 kubenswrapper[7864]: I0224 02:04:13.422354 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"] Feb 24 02:04:13.423298 master-0 kubenswrapper[7864]: I0224 02:04:13.423273 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"] Feb 24 02:04:13.429993 master-0 kubenswrapper[7864]: W0224 02:04:13.426559 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc84dc269_43ae_4083_9998_a0b3c90bb681.slice/crio-1902deada2be96bfa5d915252c7df17f18da007080acab9c3aa02ba85365b1cc WatchSource:0}: Error finding container 1902deada2be96bfa5d915252c7df17f18da007080acab9c3aa02ba85365b1cc: Status 404 returned error can't find the container with id 1902deada2be96bfa5d915252c7df17f18da007080acab9c3aa02ba85365b1cc Feb 24 02:04:13.429993 master-0 kubenswrapper[7864]: I0224 02:04:13.427900 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"] Feb 24 02:04:13.446694 master-0 kubenswrapper[7864]: I0224 02:04:13.446645 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-hxcn2"] Feb 24 02:04:13.531826 master-0 kubenswrapper[7864]: W0224 02:04:13.531752 7864 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb764f6_40f8_4e87_8be0_b9d7b0364201.slice/crio-583b21f55c4eaab72f3731e41c571ee4872bace9012dd0a496219d9d98220f85 WatchSource:0}: Error finding container 583b21f55c4eaab72f3731e41c571ee4872bace9012dd0a496219d9d98220f85: Status 404 returned error can't find the container with id 583b21f55c4eaab72f3731e41c571ee4872bace9012dd0a496219d9d98220f85 Feb 24 02:04:13.552718 master-0 kubenswrapper[7864]: I0224 02:04:13.552631 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:13.552908 master-0 kubenswrapper[7864]: E0224 02:04:13.552831 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:13.552993 master-0 kubenswrapper[7864]: E0224 02:04:13.552974 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca podName:a3096e84-fed3-48ad-ab9f-6d51e941cb2a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:21.552943484 +0000 UTC m=+25.880597106 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca") pod "controller-manager-5b75dfd574-s72zx" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a") : configmap "client-ca" not found Feb 24 02:04:13.744911 master-0 kubenswrapper[7864]: I0224 02:04:13.744838 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:04:14.163870 master-0 kubenswrapper[7864]: I0224 02:04:14.163825 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:14.164058 master-0 kubenswrapper[7864]: I0224 02:04:14.163883 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:14.164172 master-0 kubenswrapper[7864]: E0224 02:04:14.164124 7864 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 24 02:04:14.164252 master-0 kubenswrapper[7864]: E0224 02:04:14.164226 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert podName:90b909c3-266d-4252-848d-30d45d2effd1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:16.164203526 +0000 UTC m=+20.491857148 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert") pod "apiserver-7f665d79f-x624m" (UID: "90b909c3-266d-4252-848d-30d45d2effd1") : secret "serving-cert" not found Feb 24 02:04:14.164514 master-0 kubenswrapper[7864]: E0224 02:04:14.164489 7864 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 24 02:04:14.164585 master-0 kubenswrapper[7864]: E0224 02:04:14.164563 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit podName:90b909c3-266d-4252-848d-30d45d2effd1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:16.164542499 +0000 UTC m=+20.492196121 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit") pod "apiserver-7f665d79f-x624m" (UID: "90b909c3-266d-4252-848d-30d45d2effd1") : configmap "audit-0" not found Feb 24 02:04:14.169211 master-0 kubenswrapper[7864]: I0224 02:04:14.169169 7864 generic.go:334] "Generic (PLEG): container finished" podID="303d5058-84df-40d1-a941-896b093ae470" containerID="8930b4416260ff6550582e0ac717e48f996f0ee753ab29009f1d6eec95a046f6" exitCode=0 Feb 24 02:04:14.169307 master-0 kubenswrapper[7864]: I0224 02:04:14.169205 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerDied","Data":"8930b4416260ff6550582e0ac717e48f996f0ee753ab29009f1d6eec95a046f6"} Feb 24 02:04:14.169734 master-0 kubenswrapper[7864]: I0224 02:04:14.169712 7864 scope.go:117] "RemoveContainer" containerID="8930b4416260ff6550582e0ac717e48f996f0ee753ab29009f1d6eec95a046f6" Feb 24 02:04:14.171119 master-0 kubenswrapper[7864]: I0224 02:04:14.171075 7864 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" event={"ID":"2cb764f6-40f8-4e87-8be0-b9d7b0364201","Type":"ContainerStarted","Data":"583b21f55c4eaab72f3731e41c571ee4872bace9012dd0a496219d9d98220f85"} Feb 24 02:04:14.172437 master-0 kubenswrapper[7864]: I0224 02:04:14.172329 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" event={"ID":"c84dc269-43ae-4083-9998-a0b3c90bb681","Type":"ContainerStarted","Data":"1902deada2be96bfa5d915252c7df17f18da007080acab9c3aa02ba85365b1cc"} Feb 24 02:04:14.173863 master-0 kubenswrapper[7864]: I0224 02:04:14.173824 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" event={"ID":"6a9ccd8e-d964-4c03-8ffc-51b464030c25","Type":"ContainerStarted","Data":"56b5dc5b3e9740ae05d95dc7b2a84307e363cddd956bef52b197b1f840f462b7"} Feb 24 02:04:14.175057 master-0 kubenswrapper[7864]: I0224 02:04:14.175009 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"68a30a55ed2f979625e18a77c39c55f0bd820b511f058e5d010e556725054ded"} Feb 24 02:04:14.175981 master-0 kubenswrapper[7864]: I0224 02:04:14.175948 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerStarted","Data":"7ed7554e0b6eb88f1840ed2eee6ab3bddc21f230340186b0439d63f9a885eb31"} Feb 24 02:04:15.187920 master-0 kubenswrapper[7864]: I0224 02:04:15.187566 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" 
event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerStarted","Data":"af016520fee3befe328cfcacf1e61661d632e7bab3a83e84890042b8512881e1"} Feb 24 02:04:16.029314 master-0 kubenswrapper[7864]: I0224 02:04:16.029190 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7f665d79f-x624m"] Feb 24 02:04:16.030653 master-0 kubenswrapper[7864]: E0224 02:04:16.029975 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-7f665d79f-x624m" podUID="90b909c3-266d-4252-848d-30d45d2effd1" Feb 24 02:04:16.194902 master-0 kubenswrapper[7864]: I0224 02:04:16.194837 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:16.194902 master-0 kubenswrapper[7864]: I0224 02:04:16.194888 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit\") pod \"apiserver-7f665d79f-x624m\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:16.195720 master-0 kubenswrapper[7864]: E0224 02:04:16.195112 7864 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 24 02:04:16.195720 master-0 kubenswrapper[7864]: I0224 02:04:16.195144 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:16.195720 master-0 kubenswrapper[7864]: E0224 02:04:16.195204 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert podName:90b909c3-266d-4252-848d-30d45d2effd1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:20.195178829 +0000 UTC m=+24.522832451 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert") pod "apiserver-7f665d79f-x624m" (UID: "90b909c3-266d-4252-848d-30d45d2effd1") : secret "serving-cert" not found Feb 24 02:04:16.195720 master-0 kubenswrapper[7864]: I0224 02:04:16.195484 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-rjbl5" event={"ID":"d8e20d47-aeb6-41bf-9715-c437beb8e9e4","Type":"ContainerStarted","Data":"e3388d1f93809d1412794f7fce092cae4e044368882706df0d8c690d58cc928d"} Feb 24 02:04:16.195720 master-0 kubenswrapper[7864]: E0224 02:04:16.195688 7864 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 24 02:04:16.195720 master-0 kubenswrapper[7864]: E0224 02:04:16.195718 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit podName:90b909c3-266d-4252-848d-30d45d2effd1 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:20.195711298 +0000 UTC m=+24.523364920 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit") pod "apiserver-7f665d79f-x624m" (UID: "90b909c3-266d-4252-848d-30d45d2effd1") : configmap "audit-0" not found Feb 24 02:04:16.208358 master-0 kubenswrapper[7864]: I0224 02:04:16.207055 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:16.297732 master-0 kubenswrapper[7864]: I0224 02:04:16.297196 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-etcd-client\") pod \"90b909c3-266d-4252-848d-30d45d2effd1\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " Feb 24 02:04:16.297732 master-0 kubenswrapper[7864]: I0224 02:04:16.297705 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw9nk\" (UniqueName: \"kubernetes.io/projected/90b909c3-266d-4252-848d-30d45d2effd1-kube-api-access-nw9nk\") pod \"90b909c3-266d-4252-848d-30d45d2effd1\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " Feb 24 02:04:16.297954 master-0 kubenswrapper[7864]: I0224 02:04:16.297940 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-config\") pod \"90b909c3-266d-4252-848d-30d45d2effd1\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " Feb 24 02:04:16.298022 master-0 kubenswrapper[7864]: I0224 02:04:16.298001 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-encryption-config\") pod \"90b909c3-266d-4252-848d-30d45d2effd1\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298111 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-audit-dir\") pod \"90b909c3-266d-4252-848d-30d45d2effd1\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298170 7864 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-etcd-serving-ca\") pod \"90b909c3-266d-4252-848d-30d45d2effd1\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298218 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-image-import-ca\") pod \"90b909c3-266d-4252-848d-30d45d2effd1\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298258 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-trusted-ca-bundle\") pod \"90b909c3-266d-4252-848d-30d45d2effd1\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298296 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-node-pullsecrets\") pod \"90b909c3-266d-4252-848d-30d45d2effd1\" (UID: \"90b909c3-266d-4252-848d-30d45d2effd1\") " Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298338 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "90b909c3-266d-4252-848d-30d45d2effd1" (UID: "90b909c3-266d-4252-848d-30d45d2effd1"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298529 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "90b909c3-266d-4252-848d-30d45d2effd1" (UID: "90b909c3-266d-4252-848d-30d45d2effd1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298792 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "90b909c3-266d-4252-848d-30d45d2effd1" (UID: "90b909c3-266d-4252-848d-30d45d2effd1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298807 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "90b909c3-266d-4252-848d-30d45d2effd1" (UID: "90b909c3-266d-4252-848d-30d45d2effd1"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298864 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "90b909c3-266d-4252-848d-30d45d2effd1" (UID: "90b909c3-266d-4252-848d-30d45d2effd1"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298952 7864 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298969 7864 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298979 7864 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-image-import-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298989 7864 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.299000 7864 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90b909c3-266d-4252-848d-30d45d2effd1-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:16.299130 master-0 kubenswrapper[7864]: I0224 02:04:16.298988 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-config" (OuterVolumeSpecName: "config") pod "90b909c3-266d-4252-848d-30d45d2effd1" (UID: "90b909c3-266d-4252-848d-30d45d2effd1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:16.304142 master-0 kubenswrapper[7864]: I0224 02:04:16.304104 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "90b909c3-266d-4252-848d-30d45d2effd1" (UID: "90b909c3-266d-4252-848d-30d45d2effd1"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:04:16.304441 master-0 kubenswrapper[7864]: I0224 02:04:16.304388 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b909c3-266d-4252-848d-30d45d2effd1-kube-api-access-nw9nk" (OuterVolumeSpecName: "kube-api-access-nw9nk") pod "90b909c3-266d-4252-848d-30d45d2effd1" (UID: "90b909c3-266d-4252-848d-30d45d2effd1"). InnerVolumeSpecName "kube-api-access-nw9nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:04:16.304815 master-0 kubenswrapper[7864]: I0224 02:04:16.304780 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "90b909c3-266d-4252-848d-30d45d2effd1" (UID: "90b909c3-266d-4252-848d-30d45d2effd1"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:04:16.400436 master-0 kubenswrapper[7864]: I0224 02:04:16.400356 7864 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:16.400436 master-0 kubenswrapper[7864]: I0224 02:04:16.400416 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw9nk\" (UniqueName: \"kubernetes.io/projected/90b909c3-266d-4252-848d-30d45d2effd1-kube-api-access-nw9nk\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:16.400436 master-0 kubenswrapper[7864]: I0224 02:04:16.400437 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:16.400776 master-0 kubenswrapper[7864]: I0224 02:04:16.400457 7864 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:17.199735 master-0 kubenswrapper[7864]: I0224 02:04:17.199683 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7f665d79f-x624m" Feb 24 02:04:17.275228 master-0 kubenswrapper[7864]: I0224 02:04:17.275147 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-79dc9447fd-x64vl"] Feb 24 02:04:17.276991 master-0 kubenswrapper[7864]: I0224 02:04:17.276932 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.281509 master-0 kubenswrapper[7864]: I0224 02:04:17.280566 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 02:04:17.281509 master-0 kubenswrapper[7864]: I0224 02:04:17.280957 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 02:04:17.281509 master-0 kubenswrapper[7864]: I0224 02:04:17.281222 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 02:04:17.281886 master-0 kubenswrapper[7864]: I0224 02:04:17.281661 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 02:04:17.281960 master-0 kubenswrapper[7864]: I0224 02:04:17.281888 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 02:04:17.282939 master-0 kubenswrapper[7864]: I0224 02:04:17.282215 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 02:04:17.283387 master-0 kubenswrapper[7864]: I0224 02:04:17.283339 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 02:04:17.283684 master-0 kubenswrapper[7864]: I0224 02:04:17.283555 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 02:04:17.283684 master-0 kubenswrapper[7864]: I0224 02:04:17.283654 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 02:04:17.298799 master-0 kubenswrapper[7864]: I0224 02:04:17.298712 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7f665d79f-x624m"] Feb 24 02:04:17.303953 master-0 kubenswrapper[7864]: I0224 02:04:17.300299 7864 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-79dc9447fd-x64vl"] Feb 24 02:04:17.303953 master-0 kubenswrapper[7864]: I0224 02:04:17.302052 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-7f665d79f-x624m"] Feb 24 02:04:17.305113 master-0 kubenswrapper[7864]: I0224 02:04:17.304465 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 02:04:17.417696 master-0 kubenswrapper[7864]: I0224 02:04:17.417595 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-audit-dir\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.417967 master-0 kubenswrapper[7864]: I0224 02:04:17.417779 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-node-pullsecrets\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.417967 master-0 kubenswrapper[7864]: I0224 02:04:17.417851 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-image-import-ca\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.417967 master-0 kubenswrapper[7864]: I0224 02:04:17.417937 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsb4q\" (UniqueName: 
\"kubernetes.io/projected/25190a18-bdac-479b-b526-840d28636be3-kube-api-access-bsb4q\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.418164 master-0 kubenswrapper[7864]: I0224 02:04:17.418016 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-trusted-ca-bundle\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.418164 master-0 kubenswrapper[7864]: I0224 02:04:17.418055 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-etcd-client\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.418284 master-0 kubenswrapper[7864]: I0224 02:04:17.418181 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-etcd-serving-ca\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.418284 master-0 kubenswrapper[7864]: I0224 02:04:17.418211 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.418284 master-0 kubenswrapper[7864]: I0224 02:04:17.418280 
7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-config\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.418537 master-0 kubenswrapper[7864]: I0224 02:04:17.418328 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-audit\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.418537 master-0 kubenswrapper[7864]: I0224 02:04:17.418350 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-encryption-config\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.418537 master-0 kubenswrapper[7864]: I0224 02:04:17.418400 7864 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/90b909c3-266d-4252-848d-30d45d2effd1-audit\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:17.418537 master-0 kubenswrapper[7864]: I0224 02:04:17.418413 7864 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b909c3-266d-4252-848d-30d45d2effd1-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:17.519469 master-0 kubenswrapper[7864]: I0224 02:04:17.519323 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-config\") pod 
\"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519469 master-0 kubenswrapper[7864]: I0224 02:04:17.519371 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-encryption-config\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519469 master-0 kubenswrapper[7864]: I0224 02:04:17.519404 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-audit\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519469 master-0 kubenswrapper[7864]: I0224 02:04:17.519462 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-audit-dir\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519866 master-0 kubenswrapper[7864]: I0224 02:04:17.519484 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-node-pullsecrets\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519866 master-0 kubenswrapper[7864]: I0224 02:04:17.519508 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-image-import-ca\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519866 master-0 kubenswrapper[7864]: I0224 02:04:17.519527 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsb4q\" (UniqueName: \"kubernetes.io/projected/25190a18-bdac-479b-b526-840d28636be3-kube-api-access-bsb4q\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519866 master-0 kubenswrapper[7864]: I0224 02:04:17.519556 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-trusted-ca-bundle\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519866 master-0 kubenswrapper[7864]: I0224 02:04:17.519592 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-etcd-client\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519866 master-0 kubenswrapper[7864]: I0224 02:04:17.519627 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.519866 master-0 kubenswrapper[7864]: I0224 02:04:17.519651 7864 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-etcd-serving-ca\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.520417 master-0 kubenswrapper[7864]: I0224 02:04:17.519963 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-node-pullsecrets\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.521403 master-0 kubenswrapper[7864]: I0224 02:04:17.520806 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-config\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.521403 master-0 kubenswrapper[7864]: E0224 02:04:17.520953 7864 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 24 02:04:17.521403 master-0 kubenswrapper[7864]: I0224 02:04:17.521134 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-audit-dir\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.522015 master-0 kubenswrapper[7864]: I0224 02:04:17.521983 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-etcd-serving-ca\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " 
pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.522067 master-0 kubenswrapper[7864]: I0224 02:04:17.522019 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-audit\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.522659 master-0 kubenswrapper[7864]: I0224 02:04:17.522538 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-trusted-ca-bundle\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.525238 master-0 kubenswrapper[7864]: E0224 02:04:17.524165 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert podName:25190a18-bdac-479b-b526-840d28636be3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:18.021025398 +0000 UTC m=+22.348679050 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert") pod "apiserver-79dc9447fd-x64vl" (UID: "25190a18-bdac-479b-b526-840d28636be3") : secret "serving-cert" not found Feb 24 02:04:17.525238 master-0 kubenswrapper[7864]: I0224 02:04:17.524701 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-image-import-ca\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.529700 master-0 kubenswrapper[7864]: I0224 02:04:17.529632 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-encryption-config\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.539182 master-0 kubenswrapper[7864]: I0224 02:04:17.539124 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-etcd-client\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.551463 master-0 kubenswrapper[7864]: I0224 02:04:17.551403 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsb4q\" (UniqueName: \"kubernetes.io/projected/25190a18-bdac-479b-b526-840d28636be3-kube-api-access-bsb4q\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:17.883662 master-0 kubenswrapper[7864]: I0224 02:04:17.883411 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="90b909c3-266d-4252-848d-30d45d2effd1" path="/var/lib/kubelet/pods/90b909c3-266d-4252-848d-30d45d2effd1/volumes" Feb 24 02:04:18.027030 master-0 kubenswrapper[7864]: I0224 02:04:18.026936 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:18.027298 master-0 kubenswrapper[7864]: E0224 02:04:18.027174 7864 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 24 02:04:18.027298 master-0 kubenswrapper[7864]: E0224 02:04:18.027272 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert podName:25190a18-bdac-479b-b526-840d28636be3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:19.027247864 +0000 UTC m=+23.354901486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert") pod "apiserver-79dc9447fd-x64vl" (UID: "25190a18-bdac-479b-b526-840d28636be3") : secret "serving-cert" not found Feb 24 02:04:19.047545 master-0 kubenswrapper[7864]: I0224 02:04:19.047465 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:19.048093 master-0 kubenswrapper[7864]: E0224 02:04:19.047787 7864 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 24 02:04:19.048093 master-0 kubenswrapper[7864]: E0224 02:04:19.047928 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert podName:25190a18-bdac-479b-b526-840d28636be3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:21.047894983 +0000 UTC m=+25.375548615 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert") pod "apiserver-79dc9447fd-x64vl" (UID: "25190a18-bdac-479b-b526-840d28636be3") : secret "serving-cert" not found Feb 24 02:04:19.148690 master-0 kubenswrapper[7864]: I0224 02:04:19.148589 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:19.148907 master-0 kubenswrapper[7864]: I0224 02:04:19.148838 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:19.149055 master-0 kubenswrapper[7864]: E0224 02:04:19.149003 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:19.149151 master-0 kubenswrapper[7864]: E0224 02:04:19.149128 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca podName:562a4f9a-d27d-4b88-8ab2-92b2cb0277b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:35.149097091 +0000 UTC m=+39.476750713 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca") pod "route-controller-manager-9786ffb6f-5tj2q" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3") : configmap "client-ca" not found Feb 24 02:04:19.153416 master-0 kubenswrapper[7864]: I0224 02:04:19.153357 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") pod \"route-controller-manager-9786ffb6f-5tj2q\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:19.432880 master-0 kubenswrapper[7864]: I0224 02:04:19.432708 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 24 02:04:19.433622 master-0 kubenswrapper[7864]: I0224 02:04:19.433375 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.436855 master-0 kubenswrapper[7864]: I0224 02:04:19.436785 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 24 02:04:19.443601 master-0 kubenswrapper[7864]: I0224 02:04:19.443526 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 24 02:04:19.553556 master-0 kubenswrapper[7864]: I0224 02:04:19.553202 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-var-lock\") pod \"installer-1-master-0\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.553556 master-0 kubenswrapper[7864]: I0224 02:04:19.553568 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8436c0c2-ba30-462c-a003-ce076ff59ee1-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.553903 master-0 kubenswrapper[7864]: I0224 02:04:19.553634 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.655252 master-0 kubenswrapper[7864]: I0224 02:04:19.655177 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8436c0c2-ba30-462c-a003-ce076ff59ee1-kube-api-access\") pod \"installer-1-master-0\" (UID: 
\"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.656555 master-0 kubenswrapper[7864]: I0224 02:04:19.655681 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.656555 master-0 kubenswrapper[7864]: I0224 02:04:19.655832 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.656555 master-0 kubenswrapper[7864]: I0224 02:04:19.655948 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-var-lock\") pod \"installer-1-master-0\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.656555 master-0 kubenswrapper[7864]: I0224 02:04:19.656022 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-var-lock\") pod \"installer-1-master-0\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.674975 master-0 kubenswrapper[7864]: I0224 02:04:19.674941 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8436c0c2-ba30-462c-a003-ce076ff59ee1-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " 
pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:19.800493 master-0 kubenswrapper[7864]: I0224 02:04:19.800422 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:20.434210 master-0 kubenswrapper[7864]: I0224 02:04:20.434132 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:04:20.435040 master-0 kubenswrapper[7864]: I0224 02:04:20.434399 7864 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:04:20.459651 master-0 kubenswrapper[7864]: I0224 02:04:20.459604 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:04:21.076657 master-0 kubenswrapper[7864]: I0224 02:04:21.074902 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:21.094045 master-0 kubenswrapper[7864]: I0224 02:04:21.093960 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:21.120107 master-0 kubenswrapper[7864]: I0224 02:04:21.120026 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b75dfd574-s72zx"] Feb 24 02:04:21.120962 master-0 kubenswrapper[7864]: E0224 02:04:21.120823 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to 
process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" podUID="a3096e84-fed3-48ad-ab9f-6d51e941cb2a" Feb 24 02:04:21.140404 master-0 kubenswrapper[7864]: I0224 02:04:21.139102 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"] Feb 24 02:04:21.140404 master-0 kubenswrapper[7864]: E0224 02:04:21.139769 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" podUID="562a4f9a-d27d-4b88-8ab2-92b2cb0277b3" Feb 24 02:04:21.217780 master-0 kubenswrapper[7864]: I0224 02:04:21.217166 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:21.218371 master-0 kubenswrapper[7864]: I0224 02:04:21.218131 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:21.229428 master-0 kubenswrapper[7864]: I0224 02:04:21.229383 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:21.236358 master-0 kubenswrapper[7864]: I0224 02:04:21.236300 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q" Feb 24 02:04:21.245816 master-0 kubenswrapper[7864]: I0224 02:04:21.245756 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:04:21.380532 master-0 kubenswrapper[7864]: I0224 02:04:21.379857 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-config\") pod \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " Feb 24 02:04:21.380532 master-0 kubenswrapper[7864]: I0224 02:04:21.379999 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-kube-api-access-nrn8d\") pod \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " Feb 24 02:04:21.380532 master-0 kubenswrapper[7864]: I0224 02:04:21.380054 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-proxy-ca-bundles\") pod \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " Feb 24 02:04:21.380532 master-0 kubenswrapper[7864]: I0224 02:04:21.380102 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") pod \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " Feb 24 02:04:21.380532 master-0 kubenswrapper[7864]: I0224 02:04:21.380141 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-config\") pod \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " Feb 24 02:04:21.380532 master-0 kubenswrapper[7864]: I0224 02:04:21.380197 7864 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zhgqz\" (UniqueName: \"kubernetes.io/projected/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-kube-api-access-zhgqz\") pod \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\" (UID: \"562a4f9a-d27d-4b88-8ab2-92b2cb0277b3\") " Feb 24 02:04:21.380532 master-0 kubenswrapper[7864]: I0224 02:04:21.380270 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert\") pod \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " Feb 24 02:04:21.381212 master-0 kubenswrapper[7864]: I0224 02:04:21.380670 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-config" (OuterVolumeSpecName: "config") pod "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:21.381212 master-0 kubenswrapper[7864]: I0224 02:04:21.381175 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-config" (OuterVolumeSpecName: "config") pod "a3096e84-fed3-48ad-ab9f-6d51e941cb2a" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:21.382083 master-0 kubenswrapper[7864]: I0224 02:04:21.382008 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:21.382083 master-0 kubenswrapper[7864]: I0224 02:04:21.382046 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:21.382679 master-0 kubenswrapper[7864]: I0224 02:04:21.382632 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a3096e84-fed3-48ad-ab9f-6d51e941cb2a" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:21.384025 master-0 kubenswrapper[7864]: I0224 02:04:21.383924 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:04:21.384951 master-0 kubenswrapper[7864]: I0224 02:04:21.384852 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-kube-api-access-nrn8d" (OuterVolumeSpecName: "kube-api-access-nrn8d") pod "a3096e84-fed3-48ad-ab9f-6d51e941cb2a" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a"). InnerVolumeSpecName "kube-api-access-nrn8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:04:21.386141 master-0 kubenswrapper[7864]: I0224 02:04:21.386022 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-kube-api-access-zhgqz" (OuterVolumeSpecName: "kube-api-access-zhgqz") pod "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3" (UID: "562a4f9a-d27d-4b88-8ab2-92b2cb0277b3"). InnerVolumeSpecName "kube-api-access-zhgqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:04:21.386814 master-0 kubenswrapper[7864]: I0224 02:04:21.386757 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a3096e84-fed3-48ad-ab9f-6d51e941cb2a" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:04:21.483836 master-0 kubenswrapper[7864]: I0224 02:04:21.483777 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrn8d\" (UniqueName: \"kubernetes.io/projected/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-kube-api-access-nrn8d\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:21.483836 master-0 kubenswrapper[7864]: I0224 02:04:21.483835 7864 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:21.484495 master-0 kubenswrapper[7864]: I0224 02:04:21.483856 7864 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:21.484495 master-0 kubenswrapper[7864]: I0224 02:04:21.483876 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhgqz\" 
(UniqueName: \"kubernetes.io/projected/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-kube-api-access-zhgqz\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:21.484495 master-0 kubenswrapper[7864]: I0224 02:04:21.483897 7864 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:21.584857 master-0 kubenswrapper[7864]: I0224 02:04:21.584784 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca\") pod \"controller-manager-5b75dfd574-s72zx\" (UID: \"a3096e84-fed3-48ad-ab9f-6d51e941cb2a\") " pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx" Feb 24 02:04:21.585078 master-0 kubenswrapper[7864]: E0224 02:04:21.585032 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:21.585166 master-0 kubenswrapper[7864]: E0224 02:04:21.585125 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca podName:a3096e84-fed3-48ad-ab9f-6d51e941cb2a nodeName:}" failed. No retries permitted until 2026-02-24 02:04:37.585098181 +0000 UTC m=+41.912751833 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca") pod "controller-manager-5b75dfd574-s72zx" (UID: "a3096e84-fed3-48ad-ab9f-6d51e941cb2a") : configmap "client-ca" not found
Feb 24 02:04:21.777407 master-0 kubenswrapper[7864]: I0224 02:04:21.776645 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 24 02:04:21.824627 master-0 kubenswrapper[7864]: I0224 02:04:21.824554 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-79dc9447fd-x64vl"]
Feb 24 02:04:21.844078 master-0 kubenswrapper[7864]: W0224 02:04:21.844012 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25190a18_bdac_479b_b526_840d28636be3.slice/crio-229e86eaf4e88e77ef5e6c4bc8577da2618b97072b856ff2c58bb725165574ff WatchSource:0}: Error finding container 229e86eaf4e88e77ef5e6c4bc8577da2618b97072b856ff2c58bb725165574ff: Status 404 returned error can't find the container with id 229e86eaf4e88e77ef5e6c4bc8577da2618b97072b856ff2c58bb725165574ff
Feb 24 02:04:22.078592 master-0 kubenswrapper[7864]: I0224 02:04:22.074534 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-26b2v"]
Feb 24 02:04:22.078592 master-0 kubenswrapper[7864]: I0224 02:04:22.075392 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208470 master-0 kubenswrapper[7864]: I0224 02:04:22.208411 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-lib-modules\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208684 master-0 kubenswrapper[7864]: I0224 02:04:22.208517 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-var-lib-kubelet\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208684 master-0 kubenswrapper[7864]: I0224 02:04:22.208562 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-kubernetes\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208684 master-0 kubenswrapper[7864]: I0224 02:04:22.208603 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-996wg\" (UniqueName: \"kubernetes.io/projected/638b3f88-0386-4f30-8ca5-6255e8f936fc-kube-api-access-996wg\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208684 master-0 kubenswrapper[7864]: I0224 02:04:22.208651 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-conf\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208684 master-0 kubenswrapper[7864]: I0224 02:04:22.208666 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-systemd\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208684 master-0 kubenswrapper[7864]: I0224 02:04:22.208682 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysconfig\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208861 master-0 kubenswrapper[7864]: I0224 02:04:22.208706 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-host\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208861 master-0 kubenswrapper[7864]: I0224 02:04:22.208722 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-sys\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208861 master-0 kubenswrapper[7864]: I0224 02:04:22.208737 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-run\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208861 master-0 kubenswrapper[7864]: I0224 02:04:22.208752 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-tmp\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208861 master-0 kubenswrapper[7864]: I0224 02:04:22.208775 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208861 master-0 kubenswrapper[7864]: I0224 02:04:22.208800 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-tuned\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.208861 master-0 kubenswrapper[7864]: I0224 02:04:22.208816 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-modprobe-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.223018 master-0 kubenswrapper[7864]: I0224 02:04:22.222970 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" event={"ID":"6a9ccd8e-d964-4c03-8ffc-51b464030c25","Type":"ContainerStarted","Data":"3b252dadf775881151b12a03ae689a3184f813df8c5304f84973c53cfd29954e"}
Feb 24 02:04:22.226779 master-0 kubenswrapper[7864]: I0224 02:04:22.226713 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"71611d4b716a18718dc4de42fcc89f0b3de2244f35a8054faf3e9c668e532c8f"}
Feb 24 02:04:22.226779 master-0 kubenswrapper[7864]: I0224 02:04:22.226738 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"bb8e1724e77d6ceb463e444b223fcd8637d9a803be2af1a8dcbebbfedcda21d8"}
Feb 24 02:04:22.228193 master-0 kubenswrapper[7864]: I0224 02:04:22.228171 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"8436c0c2-ba30-462c-a003-ce076ff59ee1","Type":"ContainerStarted","Data":"591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4"}
Feb 24 02:04:22.228257 master-0 kubenswrapper[7864]: I0224 02:04:22.228196 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"8436c0c2-ba30-462c-a003-ce076ff59ee1","Type":"ContainerStarted","Data":"eae8d87dbece0b375f133f5b2d88b4f7a5bcf77c6a392134712a99310e0394fe"}
Feb 24 02:04:22.230141 master-0 kubenswrapper[7864]: I0224 02:04:22.230119 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" event={"ID":"4f72a322-2142-482a-9b0b-2ad890181d7a","Type":"ContainerStarted","Data":"838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794"}
Feb 24 02:04:22.231481 master-0 kubenswrapper[7864]: I0224 02:04:22.231457 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerStarted","Data":"8c06bdff0d8155655c32b3773e94d9d6596a89111686f4ac225c1d656438d2f6"}
Feb 24 02:04:22.231481 master-0 kubenswrapper[7864]: I0224 02:04:22.231479 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerStarted","Data":"7144c5e947ad686471e67b52048230854640c3d324dfe4c40330e542a4803eda"}
Feb 24 02:04:22.248796 master-0 kubenswrapper[7864]: I0224 02:04:22.248750 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" event={"ID":"25190a18-bdac-479b-b526-840d28636be3","Type":"ContainerStarted","Data":"229e86eaf4e88e77ef5e6c4bc8577da2618b97072b856ff2c58bb725165574ff"}
Feb 24 02:04:22.252284 master-0 kubenswrapper[7864]: I0224 02:04:22.252263 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" event={"ID":"2cb764f6-40f8-4e87-8be0-b9d7b0364201","Type":"ContainerStarted","Data":"a4e2b660500e8f18a668256b1dac8c7a8ab77c9c1715967242e37dc5cc945cda"}
Feb 24 02:04:22.254608 master-0 kubenswrapper[7864]: I0224 02:04:22.254553 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"
Feb 24 02:04:22.254741 master-0 kubenswrapper[7864]: I0224 02:04:22.254690 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" event={"ID":"c84dc269-43ae-4083-9998-a0b3c90bb681","Type":"ContainerStarted","Data":"d334785d9219b7d8b6844162f83a93171c2b15443bfd41ab182a8af1bf2c16e9"}
Feb 24 02:04:22.255379 master-0 kubenswrapper[7864]: I0224 02:04:22.255339 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b75dfd574-s72zx"
Feb 24 02:04:22.310349 master-0 kubenswrapper[7864]: I0224 02:04:22.310182 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-kubernetes\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.310349 master-0 kubenswrapper[7864]: I0224 02:04:22.310260 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-996wg\" (UniqueName: \"kubernetes.io/projected/638b3f88-0386-4f30-8ca5-6255e8f936fc-kube-api-access-996wg\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.310566 master-0 kubenswrapper[7864]: I0224 02:04:22.310527 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-kubernetes\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.310697 master-0 kubenswrapper[7864]: I0224 02:04:22.310646 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-conf\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.310738 master-0 kubenswrapper[7864]: I0224 02:04:22.310712 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-systemd\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.310770 master-0 kubenswrapper[7864]: I0224 02:04:22.310757 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysconfig\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.310817 master-0 kubenswrapper[7864]: I0224 02:04:22.310801 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-host\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.310850 master-0 kubenswrapper[7864]: I0224 02:04:22.310825 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-sys\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.310947 master-0 kubenswrapper[7864]: I0224 02:04:22.310925 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-systemd\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.310988 master-0 kubenswrapper[7864]: I0224 02:04:22.310967 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-run\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.311020 master-0 kubenswrapper[7864]: I0224 02:04:22.310999 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-tmp\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.311051 master-0 kubenswrapper[7864]: I0224 02:04:22.311028 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.311083 master-0 kubenswrapper[7864]: I0224 02:04:22.311073 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-tuned\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.311114 master-0 kubenswrapper[7864]: I0224 02:04:22.311057 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=3.311021969 podStartE2EDuration="3.311021969s" podCreationTimestamp="2026-02-24 02:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:22.308241989 +0000 UTC m=+26.635895611" watchObservedRunningTime="2026-02-24 02:04:22.311021969 +0000 UTC m=+26.638675611"
Feb 24 02:04:22.311478 master-0 kubenswrapper[7864]: I0224 02:04:22.311410 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-run\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.311797 master-0 kubenswrapper[7864]: I0224 02:04:22.311741 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-modprobe-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.311840 master-0 kubenswrapper[7864]: I0224 02:04:22.311798 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysconfig\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.311840 master-0 kubenswrapper[7864]: I0224 02:04:22.311769 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-sys\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.311840 master-0 kubenswrapper[7864]: I0224 02:04:22.311816 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-lib-modules\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.311978 master-0 kubenswrapper[7864]: I0224 02:04:22.311939 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-var-lib-kubelet\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.312012 master-0 kubenswrapper[7864]: I0224 02:04:22.311983 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-host\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.312041 master-0 kubenswrapper[7864]: I0224 02:04:22.312031 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-lib-modules\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.312143 master-0 kubenswrapper[7864]: I0224 02:04:22.312105 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.312253 master-0 kubenswrapper[7864]: I0224 02:04:22.312229 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-modprobe-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.312371 master-0 kubenswrapper[7864]: I0224 02:04:22.312335 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-conf\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.312931 master-0 kubenswrapper[7864]: I0224 02:04:22.312903 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-var-lib-kubelet\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.321534 master-0 kubenswrapper[7864]: I0224 02:04:22.321493 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-tmp\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.330751 master-0 kubenswrapper[7864]: I0224 02:04:22.330405 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-tuned\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.343817 master-0 kubenswrapper[7864]: I0224 02:04:22.343775 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-996wg\" (UniqueName: \"kubernetes.io/projected/638b3f88-0386-4f30-8ca5-6255e8f936fc-kube-api-access-996wg\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.370745 master-0 kubenswrapper[7864]: I0224 02:04:22.370181 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"]
Feb 24 02:04:22.371805 master-0 kubenswrapper[7864]: I0224 02:04:22.371779 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.377472 master-0 kubenswrapper[7864]: I0224 02:04:22.377441 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 02:04:22.380450 master-0 kubenswrapper[7864]: I0224 02:04:22.380356 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 02:04:22.380450 master-0 kubenswrapper[7864]: I0224 02:04:22.380389 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 02:04:22.380682 master-0 kubenswrapper[7864]: I0224 02:04:22.380663 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 02:04:22.380752 master-0 kubenswrapper[7864]: I0224 02:04:22.380735 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 02:04:22.385053 master-0 kubenswrapper[7864]: I0224 02:04:22.384004 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"]
Feb 24 02:04:22.386871 master-0 kubenswrapper[7864]: I0224 02:04:22.386842 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"]
Feb 24 02:04:22.387717 master-0 kubenswrapper[7864]: I0224 02:04:22.387686 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9786ffb6f-5tj2q"]
Feb 24 02:04:22.434001 master-0 kubenswrapper[7864]: I0224 02:04:22.433532 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b75dfd574-s72zx"]
Feb 24 02:04:22.435371 master-0 kubenswrapper[7864]: I0224 02:04:22.435351 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b75dfd574-s72zx"]
Feb 24 02:04:22.459214 master-0 kubenswrapper[7864]: I0224 02:04:22.459170 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:04:22.473658 master-0 kubenswrapper[7864]: W0224 02:04:22.473623 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638b3f88_0386_4f30_8ca5_6255e8f936fc.slice/crio-813fd4ddbe7cd984c71971b2f1f90cf9374e4a929ba9dc48db8a98bf388f1d1f WatchSource:0}: Error finding container 813fd4ddbe7cd984c71971b2f1f90cf9374e4a929ba9dc48db8a98bf388f1d1f: Status 404 returned error can't find the container with id 813fd4ddbe7cd984c71971b2f1f90cf9374e4a929ba9dc48db8a98bf388f1d1f
Feb 24 02:04:22.526019 master-0 kubenswrapper[7864]: I0224 02:04:22.525390 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.526019 master-0 kubenswrapper[7864]: I0224 02:04:22.525912 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-config\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.526019 master-0 kubenswrapper[7864]: I0224 02:04:22.526002 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-serving-cert\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.532200 master-0 kubenswrapper[7864]: I0224 02:04:22.526122 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwmm\" (UniqueName: \"kubernetes.io/projected/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-kube-api-access-dxwmm\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.532200 master-0 kubenswrapper[7864]: I0224 02:04:22.526208 7864 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:22.577716 master-0 kubenswrapper[7864]: I0224 02:04:22.577671 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5rf6m"]
Feb 24 02:04:22.580536 master-0 kubenswrapper[7864]: I0224 02:04:22.580494 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.583474 master-0 kubenswrapper[7864]: I0224 02:04:22.583427 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 24 02:04:22.587605 master-0 kubenswrapper[7864]: I0224 02:04:22.584875 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 24 02:04:22.587605 master-0 kubenswrapper[7864]: I0224 02:04:22.585032 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 24 02:04:22.587605 master-0 kubenswrapper[7864]: I0224 02:04:22.585146 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 24 02:04:22.587605 master-0 kubenswrapper[7864]: I0224 02:04:22.586891 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5rf6m"]
Feb 24 02:04:22.629587 master-0 kubenswrapper[7864]: I0224 02:04:22.627095 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxwmm\" (UniqueName: \"kubernetes.io/projected/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-kube-api-access-dxwmm\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.629587 master-0 kubenswrapper[7864]: I0224 02:04:22.627153 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.629587 master-0 kubenswrapper[7864]: E0224 02:04:22.627551 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 24 02:04:22.629587 master-0 kubenswrapper[7864]: E0224 02:04:22.627655 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca podName:8b5bb0d2-d0a5-414b-88ea-e585a8a6f471 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:23.127633073 +0000 UTC m=+27.455286705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca") pod "route-controller-manager-975858db4-g96fv" (UID: "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471") : configmap "client-ca" not found
Feb 24 02:04:22.629587 master-0 kubenswrapper[7864]: I0224 02:04:22.627828 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-config\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.629587 master-0 kubenswrapper[7864]: I0224 02:04:22.627946 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-serving-cert\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.629587 master-0 kubenswrapper[7864]: I0224 02:04:22.628011 7864 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3096e84-fed3-48ad-ab9f-6d51e941cb2a-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:22.635589 master-0 kubenswrapper[7864]: I0224 02:04:22.630032 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-config\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.635589 master-0 kubenswrapper[7864]: I0224 02:04:22.632758 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-serving-cert\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.659588 master-0 kubenswrapper[7864]: I0224 02:04:22.655530 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxwmm\" (UniqueName: \"kubernetes.io/projected/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-kube-api-access-dxwmm\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:22.733586 master-0 kubenswrapper[7864]: I0224 02:04:22.728547 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpg44\" (UniqueName: \"kubernetes.io/projected/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-kube-api-access-qpg44\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.733586 master-0 kubenswrapper[7864]: I0224 02:04:22.728771 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-metrics-tls\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.733586 master-0 kubenswrapper[7864]: I0224 02:04:22.728821 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-config-volume\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.829993 master-0 kubenswrapper[7864]: I0224 02:04:22.829930 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-config-volume\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.830110 master-0 kubenswrapper[7864]: I0224 02:04:22.830003 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpg44\" (UniqueName: \"kubernetes.io/projected/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-kube-api-access-qpg44\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.830663 master-0 kubenswrapper[7864]: I0224 02:04:22.830629 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-metrics-tls\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.831181 master-0 kubenswrapper[7864]: I0224 02:04:22.831146 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-config-volume\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.833686 master-0 kubenswrapper[7864]: I0224 02:04:22.833465 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-metrics-tls\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.848732 master-0 kubenswrapper[7864]: I0224 02:04:22.848048 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpg44\" (UniqueName: \"kubernetes.io/projected/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-kube-api-access-qpg44\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.934348 master-0 kubenswrapper[7864]: I0224 02:04:22.931970 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:22.937350 master-0 kubenswrapper[7864]: I0224 02:04:22.937120 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4lwwp"]
Feb 24 02:04:22.940593 master-0 kubenswrapper[7864]: I0224 02:04:22.937649 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4lwwp"
Feb 24 02:04:22.956788 master-0 kubenswrapper[7864]: I0224 02:04:22.956748 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 24 02:04:22.958729 master-0 kubenswrapper[7864]: I0224 02:04:22.957254 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 24 02:04:22.962594 master-0 kubenswrapper[7864]: I0224 02:04:22.959413 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Feb 24 02:04:22.972598 master-0 kubenswrapper[7864]: I0224 02:04:22.969685 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 24 02:04:23.034291 master-0 kubenswrapper[7864]: I0224 02:04:23.033935 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " pod="openshift-etcd/installer-1-master-0"
Feb 24 02:04:23.034291 master-0 kubenswrapper[7864]: I0224 02:04:23.033987 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-hosts-file\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp"
Feb 24 02:04:23.034291 master-0 kubenswrapper[7864]: I0224 02:04:23.034011 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkpjn\" (UniqueName: \"kubernetes.io/projected/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-kube-api-access-dkpjn\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp"
Feb 24 02:04:23.034291 master-0 kubenswrapper[7864]: I0224 02:04:23.034071 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kube-api-access\") pod \"installer-1-master-0\" (UID: 
\"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " pod="openshift-etcd/installer-1-master-0" Feb 24 02:04:23.034291 master-0 kubenswrapper[7864]: I0224 02:04:23.034105 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-var-lock\") pod \"installer-1-master-0\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " pod="openshift-etcd/installer-1-master-0" Feb 24 02:04:23.130644 master-0 kubenswrapper[7864]: I0224 02:04:23.129478 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5rf6m"] Feb 24 02:04:23.135515 master-0 kubenswrapper[7864]: I0224 02:04:23.135490 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kube-api-access\") pod \"installer-1-master-0\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " pod="openshift-etcd/installer-1-master-0" Feb 24 02:04:23.135613 master-0 kubenswrapper[7864]: I0224 02:04:23.135527 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv" Feb 24 02:04:23.135613 master-0 kubenswrapper[7864]: I0224 02:04:23.135550 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-var-lock\") pod \"installer-1-master-0\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " pod="openshift-etcd/installer-1-master-0" Feb 24 02:04:23.135730 master-0 kubenswrapper[7864]: I0224 02:04:23.135619 7864 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " pod="openshift-etcd/installer-1-master-0" Feb 24 02:04:23.135730 master-0 kubenswrapper[7864]: I0224 02:04:23.135658 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-hosts-file\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp" Feb 24 02:04:23.135730 master-0 kubenswrapper[7864]: I0224 02:04:23.135673 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkpjn\" (UniqueName: \"kubernetes.io/projected/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-kube-api-access-dkpjn\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp" Feb 24 02:04:23.136221 master-0 kubenswrapper[7864]: I0224 02:04:23.136178 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-var-lock\") pod \"installer-1-master-0\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " pod="openshift-etcd/installer-1-master-0" Feb 24 02:04:23.136316 master-0 kubenswrapper[7864]: I0224 02:04:23.136229 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-hosts-file\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp" Feb 24 02:04:23.136397 master-0 kubenswrapper[7864]: E0224 02:04:23.136377 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:23.136435 master-0 
kubenswrapper[7864]: E0224 02:04:23.136425 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca podName:8b5bb0d2-d0a5-414b-88ea-e585a8a6f471 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:24.136410401 +0000 UTC m=+28.464064023 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca") pod "route-controller-manager-975858db4-g96fv" (UID: "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471") : configmap "client-ca" not found Feb 24 02:04:23.136543 master-0 kubenswrapper[7864]: I0224 02:04:23.136514 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " pod="openshift-etcd/installer-1-master-0" Feb 24 02:04:23.151561 master-0 kubenswrapper[7864]: W0224 02:04:23.148777 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e90470d_20e0_4eb4_bc8e_b4e4c19aab3c.slice/crio-6dc4ae2fbc88ea5c43de5d695fea8a7c8829343138c8856dcfeff187994c5c0f WatchSource:0}: Error finding container 6dc4ae2fbc88ea5c43de5d695fea8a7c8829343138c8856dcfeff187994c5c0f: Status 404 returned error can't find the container with id 6dc4ae2fbc88ea5c43de5d695fea8a7c8829343138c8856dcfeff187994c5c0f Feb 24 02:04:23.155059 master-0 kubenswrapper[7864]: I0224 02:04:23.155028 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkpjn\" (UniqueName: \"kubernetes.io/projected/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-kube-api-access-dkpjn\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp" Feb 24 02:04:23.158002 master-0 kubenswrapper[7864]: I0224 
02:04:23.157969 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kube-api-access\") pod \"installer-1-master-0\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " pod="openshift-etcd/installer-1-master-0" Feb 24 02:04:23.258844 master-0 kubenswrapper[7864]: I0224 02:04:23.258787 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4lwwp" Feb 24 02:04:23.262151 master-0 kubenswrapper[7864]: I0224 02:04:23.262092 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-26b2v" event={"ID":"638b3f88-0386-4f30-8ca5-6255e8f936fc","Type":"ContainerStarted","Data":"b391ee85aee1f228cd791d5e8c18c2ef5e9b62963dca456df8107dfcd3ddc959"} Feb 24 02:04:23.262151 master-0 kubenswrapper[7864]: I0224 02:04:23.262147 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-26b2v" event={"ID":"638b3f88-0386-4f30-8ca5-6255e8f936fc","Type":"ContainerStarted","Data":"813fd4ddbe7cd984c71971b2f1f90cf9374e4a929ba9dc48db8a98bf388f1d1f"} Feb 24 02:04:23.263603 master-0 kubenswrapper[7864]: I0224 02:04:23.263566 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5rf6m" event={"ID":"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c","Type":"ContainerStarted","Data":"6dc4ae2fbc88ea5c43de5d695fea8a7c8829343138c8856dcfeff187994c5c0f"} Feb 24 02:04:23.267788 master-0 kubenswrapper[7864]: I0224 02:04:23.266250 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" event={"ID":"2cb764f6-40f8-4e87-8be0-b9d7b0364201","Type":"ContainerStarted","Data":"0c2041bc3003c23bdf033e8d4336b3793e7dde4d2a89e2fa38af3e920180f589"} Feb 24 02:04:23.275215 master-0 kubenswrapper[7864]: I0224 02:04:23.275102 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 24 02:04:23.524424 master-0 kubenswrapper[7864]: I0224 02:04:23.523980 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-26b2v" podStartSLOduration=1.523950745 podStartE2EDuration="1.523950745s" podCreationTimestamp="2026-02-24 02:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:23.27655169 +0000 UTC m=+27.604205322" watchObservedRunningTime="2026-02-24 02:04:23.523950745 +0000 UTC m=+27.851604367" Feb 24 02:04:23.524552 master-0 kubenswrapper[7864]: I0224 02:04:23.524522 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 24 02:04:23.542012 master-0 kubenswrapper[7864]: W0224 02:04:23.541965 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod64b7ea36_8849_4955_80b5_c7e7c12fcc29.slice/crio-88f76db39d71bd25ceb20f4306d7f26b67459cb15713885a8eb24d8304cfae77 WatchSource:0}: Error finding container 88f76db39d71bd25ceb20f4306d7f26b67459cb15713885a8eb24d8304cfae77: Status 404 returned error can't find the container with id 88f76db39d71bd25ceb20f4306d7f26b67459cb15713885a8eb24d8304cfae77 Feb 24 02:04:23.881543 master-0 kubenswrapper[7864]: I0224 02:04:23.881471 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562a4f9a-d27d-4b88-8ab2-92b2cb0277b3" path="/var/lib/kubelet/pods/562a4f9a-d27d-4b88-8ab2-92b2cb0277b3/volumes" Feb 24 02:04:23.882043 master-0 kubenswrapper[7864]: I0224 02:04:23.882002 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3096e84-fed3-48ad-ab9f-6d51e941cb2a" path="/var/lib/kubelet/pods/a3096e84-fed3-48ad-ab9f-6d51e941cb2a/volumes" Feb 24 02:04:24.151379 master-0 kubenswrapper[7864]: I0224 02:04:24.151233 7864 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv" Feb 24 02:04:24.151732 master-0 kubenswrapper[7864]: E0224 02:04:24.151475 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:24.151732 master-0 kubenswrapper[7864]: E0224 02:04:24.151626 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca podName:8b5bb0d2-d0a5-414b-88ea-e585a8a6f471 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:26.151570181 +0000 UTC m=+30.479223843 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca") pod "route-controller-manager-975858db4-g96fv" (UID: "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471") : configmap "client-ca" not found Feb 24 02:04:24.277604 master-0 kubenswrapper[7864]: I0224 02:04:24.277528 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4lwwp" event={"ID":"390a7aa5-c7f7-4baf-a2d2-e6da9a465042","Type":"ContainerStarted","Data":"1be30aec3fa0bec94f6864e2fd84027a688b746b1f841fb7b577e57ec8f40903"} Feb 24 02:04:24.277830 master-0 kubenswrapper[7864]: I0224 02:04:24.277621 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4lwwp" event={"ID":"390a7aa5-c7f7-4baf-a2d2-e6da9a465042","Type":"ContainerStarted","Data":"21aa7b4dfda40f1610fd6b64e23f1c617ce7b50ea96960fc42e2a8aaa9a792b2"} Feb 24 02:04:24.279543 master-0 kubenswrapper[7864]: I0224 02:04:24.279372 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/installer-1-master-0" event={"ID":"64b7ea36-8849-4955-80b5-c7e7c12fcc29","Type":"ContainerStarted","Data":"266ce948594252c2399468918fec845a74da7e6fcd999550c798b018f78a387f"} Feb 24 02:04:24.279543 master-0 kubenswrapper[7864]: I0224 02:04:24.279448 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"64b7ea36-8849-4955-80b5-c7e7c12fcc29","Type":"ContainerStarted","Data":"88f76db39d71bd25ceb20f4306d7f26b67459cb15713885a8eb24d8304cfae77"} Feb 24 02:04:24.314562 master-0 kubenswrapper[7864]: I0224 02:04:24.313139 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=2.313113315 podStartE2EDuration="2.313113315s" podCreationTimestamp="2026-02-24 02:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:24.312622017 +0000 UTC m=+28.640275679" watchObservedRunningTime="2026-02-24 02:04:24.313113315 +0000 UTC m=+28.640766977" Feb 24 02:04:24.314562 master-0 kubenswrapper[7864]: I0224 02:04:24.313550 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4lwwp" podStartSLOduration=2.313540541 podStartE2EDuration="2.313540541s" podCreationTimestamp="2026-02-24 02:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:24.29780082 +0000 UTC m=+28.625454482" watchObservedRunningTime="2026-02-24 02:04:24.313540541 +0000 UTC m=+28.641194203" Feb 24 02:04:24.747198 master-0 kubenswrapper[7864]: I0224 02:04:24.746371 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56767fb5d4-2ghfz"] Feb 24 02:04:24.749034 master-0 kubenswrapper[7864]: I0224 02:04:24.748985 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.752215 master-0 kubenswrapper[7864]: I0224 02:04:24.752155 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 02:04:24.752607 master-0 kubenswrapper[7864]: I0224 02:04:24.752539 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 02:04:24.753253 master-0 kubenswrapper[7864]: I0224 02:04:24.753201 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 02:04:24.753253 master-0 kubenswrapper[7864]: I0224 02:04:24.753201 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 02:04:24.753369 master-0 kubenswrapper[7864]: I0224 02:04:24.753264 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 02:04:24.767656 master-0 kubenswrapper[7864]: I0224 02:04:24.767615 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56767fb5d4-2ghfz"] Feb 24 02:04:24.768550 master-0 kubenswrapper[7864]: I0224 02:04:24.768458 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 02:04:24.862161 master-0 kubenswrapper[7864]: I0224 02:04:24.862111 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-proxy-ca-bundles\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.862481 master-0 kubenswrapper[7864]: I0224 02:04:24.862448 7864 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqbdx\" (UniqueName: \"kubernetes.io/projected/80c4b74e-052d-4202-94aa-a9d35a4b2410-kube-api-access-dqbdx\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.862684 master-0 kubenswrapper[7864]: I0224 02:04:24.862657 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.862933 master-0 kubenswrapper[7864]: I0224 02:04:24.862905 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-config\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.863105 master-0 kubenswrapper[7864]: I0224 02:04:24.863080 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80c4b74e-052d-4202-94aa-a9d35a4b2410-serving-cert\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.964022 master-0 kubenswrapper[7864]: I0224 02:04:24.963930 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqbdx\" (UniqueName: \"kubernetes.io/projected/80c4b74e-052d-4202-94aa-a9d35a4b2410-kube-api-access-dqbdx\") pod 
\"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.964146 master-0 kubenswrapper[7864]: I0224 02:04:24.964031 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.964146 master-0 kubenswrapper[7864]: I0224 02:04:24.964129 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-config\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.964283 master-0 kubenswrapper[7864]: I0224 02:04:24.964167 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80c4b74e-052d-4202-94aa-a9d35a4b2410-serving-cert\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.964283 master-0 kubenswrapper[7864]: I0224 02:04:24.964267 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-proxy-ca-bundles\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.966547 master-0 kubenswrapper[7864]: I0224 02:04:24.966495 7864 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-proxy-ca-bundles\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.967204 master-0 kubenswrapper[7864]: E0224 02:04:24.967158 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:24.967283 master-0 kubenswrapper[7864]: E0224 02:04:24.967239 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca podName:80c4b74e-052d-4202-94aa-a9d35a4b2410 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:25.46721683 +0000 UTC m=+29.794870482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca") pod "controller-manager-56767fb5d4-2ghfz" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410") : configmap "client-ca" not found Feb 24 02:04:24.969958 master-0 kubenswrapper[7864]: I0224 02:04:24.969907 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-config\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.978691 master-0 kubenswrapper[7864]: I0224 02:04:24.978638 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80c4b74e-052d-4202-94aa-a9d35a4b2410-serving-cert\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:24.991827 
master-0 kubenswrapper[7864]: I0224 02:04:24.991771 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqbdx\" (UniqueName: \"kubernetes.io/projected/80c4b74e-052d-4202-94aa-a9d35a4b2410-kube-api-access-dqbdx\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:25.108125 master-0 kubenswrapper[7864]: I0224 02:04:25.107989 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"] Feb 24 02:04:25.120601 master-0 kubenswrapper[7864]: I0224 02:04:25.113856 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.120601 master-0 kubenswrapper[7864]: I0224 02:04:25.117336 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 02:04:25.120601 master-0 kubenswrapper[7864]: I0224 02:04:25.119004 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 02:04:25.120601 master-0 kubenswrapper[7864]: I0224 02:04:25.119149 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 02:04:25.120601 master-0 kubenswrapper[7864]: I0224 02:04:25.119429 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 02:04:25.120601 master-0 kubenswrapper[7864]: I0224 02:04:25.119622 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 02:04:25.120601 master-0 kubenswrapper[7864]: I0224 02:04:25.119677 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 02:04:25.120601 master-0 
kubenswrapper[7864]: I0224 02:04:25.119928 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 02:04:25.120601 master-0 kubenswrapper[7864]: I0224 02:04:25.120101 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 02:04:25.137590 master-0 kubenswrapper[7864]: I0224 02:04:25.133287 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"] Feb 24 02:04:25.174484 master-0 kubenswrapper[7864]: I0224 02:04:25.172462 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-etcd-serving-ca\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.174484 master-0 kubenswrapper[7864]: I0224 02:04:25.172553 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b176946a-c056-441c-9145-b88ca4d75758-audit-dir\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.174484 master-0 kubenswrapper[7864]: I0224 02:04:25.172664 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-etcd-client\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.174484 master-0 kubenswrapper[7864]: I0224 02:04:25.172726 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-audit-policies\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.174484 master-0 kubenswrapper[7864]: I0224 02:04:25.172782 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcq24\" (UniqueName: \"kubernetes.io/projected/b176946a-c056-441c-9145-b88ca4d75758-kube-api-access-kcq24\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.174484 master-0 kubenswrapper[7864]: I0224 02:04:25.172853 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-encryption-config\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.174484 master-0 kubenswrapper[7864]: I0224 02:04:25.172890 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-trusted-ca-bundle\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.174484 master-0 kubenswrapper[7864]: I0224 02:04:25.172924 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-serving-cert\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " 
pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.203878 master-0 kubenswrapper[7864]: I0224 02:04:25.203838 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"] Feb 24 02:04:25.204997 master-0 kubenswrapper[7864]: I0224 02:04:25.204974 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.209118 master-0 kubenswrapper[7864]: I0224 02:04:25.209062 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 24 02:04:25.210369 master-0 kubenswrapper[7864]: I0224 02:04:25.210329 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 24 02:04:25.215827 master-0 kubenswrapper[7864]: I0224 02:04:25.215774 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"] Feb 24 02:04:25.217251 master-0 kubenswrapper[7864]: I0224 02:04:25.217230 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 24 02:04:25.273977 master-0 kubenswrapper[7864]: I0224 02:04:25.273934 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-encryption-config\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.274396 master-0 kubenswrapper[7864]: I0224 02:04:25.274169 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.274595 master-0 kubenswrapper[7864]: I0224 02:04:25.274544 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-trusted-ca-bundle\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.274766 master-0 kubenswrapper[7864]: I0224 02:04:25.274740 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-serving-cert\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.274925 master-0 kubenswrapper[7864]: I0224 02:04:25.274900 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.275149 master-0 kubenswrapper[7864]: I0224 02:04:25.275124 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-etcd-serving-ca\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" 
Feb 24 02:04:25.275318 master-0 kubenswrapper[7864]: I0224 02:04:25.275291 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4a2d8ef6-14ac-490d-a931-7082344d3f46-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.275480 master-0 kubenswrapper[7864]: I0224 02:04:25.275456 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b176946a-c056-441c-9145-b88ca4d75758-audit-dir\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.275649 master-0 kubenswrapper[7864]: I0224 02:04:25.275621 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.275832 master-0 kubenswrapper[7864]: I0224 02:04:25.275807 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-etcd-client\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.276032 master-0 kubenswrapper[7864]: I0224 02:04:25.276006 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-audit-policies\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.276183 master-0 kubenswrapper[7864]: I0224 02:04:25.276159 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcq24\" (UniqueName: \"kubernetes.io/projected/b176946a-c056-441c-9145-b88ca4d75758-kube-api-access-kcq24\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.276346 master-0 kubenswrapper[7864]: I0224 02:04:25.276319 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddtsj\" (UniqueName: \"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-kube-api-access-ddtsj\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.280606 master-0 kubenswrapper[7864]: I0224 02:04:25.279186 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b176946a-c056-441c-9145-b88ca4d75758-audit-dir\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.280606 master-0 kubenswrapper[7864]: I0224 02:04:25.280173 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-audit-policies\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.283847 master-0 
kubenswrapper[7864]: I0224 02:04:25.280954 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-trusted-ca-bundle\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.284310 master-0 kubenswrapper[7864]: I0224 02:04:25.284288 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-encryption-config\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.290273 master-0 kubenswrapper[7864]: I0224 02:04:25.290235 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-serving-cert\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.307601 master-0 kubenswrapper[7864]: I0224 02:04:25.306953 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-etcd-serving-ca\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.313290 master-0 kubenswrapper[7864]: I0224 02:04:25.313256 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcq24\" (UniqueName: \"kubernetes.io/projected/b176946a-c056-441c-9145-b88ca4d75758-kube-api-access-kcq24\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " 
pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.319658 master-0 kubenswrapper[7864]: I0224 02:04:25.316810 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz"] Feb 24 02:04:25.319658 master-0 kubenswrapper[7864]: I0224 02:04:25.317318 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-etcd-client\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.319658 master-0 kubenswrapper[7864]: I0224 02:04:25.317844 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.326305 master-0 kubenswrapper[7864]: I0224 02:04:25.326252 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz"] Feb 24 02:04:25.336255 master-0 kubenswrapper[7864]: I0224 02:04:25.334824 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 24 02:04:25.336255 master-0 kubenswrapper[7864]: I0224 02:04:25.334945 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 24 02:04:25.339593 master-0 kubenswrapper[7864]: I0224 02:04:25.336682 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 24 02:04:25.339593 master-0 kubenswrapper[7864]: I0224 02:04:25.336791 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 24 02:04:25.377753 master-0 kubenswrapper[7864]: I0224 02:04:25.377716 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.377823 master-0 kubenswrapper[7864]: I0224 02:04:25.377771 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4a2d8ef6-14ac-490d-a931-7082344d3f46-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.377920 master-0 kubenswrapper[7864]: I0224 02:04:25.377885 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.378146 master-0 kubenswrapper[7864]: I0224 02:04:25.378115 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tszx\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-kube-api-access-2tszx\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.378194 master-0 kubenswrapper[7864]: I0224 02:04:25.378174 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-containers\") pod 
\"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.378262 master-0 kubenswrapper[7864]: I0224 02:04:25.378224 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtsj\" (UniqueName: \"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-kube-api-access-ddtsj\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.378327 master-0 kubenswrapper[7864]: I0224 02:04:25.378303 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4f5b3b93-a59d-495c-a311-8913fa6000fc-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.378392 master-0 kubenswrapper[7864]: I0224 02:04:25.378371 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.378622 master-0 kubenswrapper[7864]: I0224 02:04:25.378600 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.378841 master-0 kubenswrapper[7864]: I0224 02:04:25.378805 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.378934 master-0 kubenswrapper[7864]: I0224 02:04:25.378895 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4a2d8ef6-14ac-490d-a931-7082344d3f46-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.379038 master-0 kubenswrapper[7864]: I0224 02:04:25.378662 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.379172 master-0 kubenswrapper[7864]: I0224 02:04:25.379149 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.379302 master-0 kubenswrapper[7864]: I0224 02:04:25.379286 7864 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f5b3b93-a59d-495c-a311-8913fa6000fc-cache\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.383495 master-0 kubenswrapper[7864]: I0224 02:04:25.383433 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.462777 master-0 kubenswrapper[7864]: I0224 02:04:25.462716 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:04:25.481253 master-0 kubenswrapper[7864]: I0224 02:04:25.480607 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.481253 master-0 kubenswrapper[7864]: I0224 02:04:25.480953 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.481253 master-0 kubenswrapper[7864]: I0224 02:04:25.481123 7864 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.481253 master-0 kubenswrapper[7864]: I0224 02:04:25.481139 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f5b3b93-a59d-495c-a311-8913fa6000fc-cache\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.481396 master-0 kubenswrapper[7864]: I0224 02:04:25.481251 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.481396 master-0 kubenswrapper[7864]: I0224 02:04:25.481328 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tszx\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-kube-api-access-2tszx\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.481449 master-0 kubenswrapper[7864]: I0224 02:04:25.481434 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " 
pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:25.481567 master-0 kubenswrapper[7864]: I0224 02:04:25.481539 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4f5b3b93-a59d-495c-a311-8913fa6000fc-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.482544 master-0 kubenswrapper[7864]: I0224 02:04:25.482510 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f5b3b93-a59d-495c-a311-8913fa6000fc-cache\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.482723 master-0 kubenswrapper[7864]: I0224 02:04:25.482685 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.482829 master-0 kubenswrapper[7864]: E0224 02:04:25.482801 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:25.482899 master-0 kubenswrapper[7864]: E0224 02:04:25.482877 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca podName:80c4b74e-052d-4202-94aa-a9d35a4b2410 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:26.482847697 +0000 UTC m=+30.810501359 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca") pod "controller-manager-56767fb5d4-2ghfz" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410") : configmap "client-ca" not found Feb 24 02:04:25.485474 master-0 kubenswrapper[7864]: I0224 02:04:25.485415 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.487795 master-0 kubenswrapper[7864]: I0224 02:04:25.487764 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4f5b3b93-a59d-495c-a311-8913fa6000fc-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.564093 master-0 kubenswrapper[7864]: I0224 02:04:25.563115 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtsj\" (UniqueName: \"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-kube-api-access-ddtsj\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:25.627910 master-0 kubenswrapper[7864]: I0224 02:04:25.627801 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tszx\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-kube-api-access-2tszx\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.673409 master-0 kubenswrapper[7864]: I0224 02:04:25.673346 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:25.848715 master-0 kubenswrapper[7864]: I0224 02:04:25.848611 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:26.198619 master-0 kubenswrapper[7864]: I0224 02:04:26.196418 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv" Feb 24 02:04:26.198619 master-0 kubenswrapper[7864]: E0224 02:04:26.196724 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:26.198619 master-0 kubenswrapper[7864]: E0224 02:04:26.196873 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca podName:8b5bb0d2-d0a5-414b-88ea-e585a8a6f471 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:30.196836822 +0000 UTC m=+34.524490474 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca") pod "route-controller-manager-975858db4-g96fv" (UID: "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471") : configmap "client-ca" not found Feb 24 02:04:26.501161 master-0 kubenswrapper[7864]: I0224 02:04:26.501027 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:26.501353 master-0 kubenswrapper[7864]: E0224 02:04:26.501254 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:26.501353 master-0 kubenswrapper[7864]: E0224 02:04:26.501329 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca podName:80c4b74e-052d-4202-94aa-a9d35a4b2410 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:28.501306535 +0000 UTC m=+32.828960157 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca") pod "controller-manager-56767fb5d4-2ghfz" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410") : configmap "client-ca" not found Feb 24 02:04:28.536962 master-0 kubenswrapper[7864]: I0224 02:04:28.536862 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:28.537709 master-0 kubenswrapper[7864]: E0224 02:04:28.537098 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:28.537709 master-0 kubenswrapper[7864]: E0224 02:04:28.537244 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca podName:80c4b74e-052d-4202-94aa-a9d35a4b2410 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:32.537202397 +0000 UTC m=+36.864856059 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca") pod "controller-manager-56767fb5d4-2ghfz" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410") : configmap "client-ca" not found Feb 24 02:04:28.840072 master-0 kubenswrapper[7864]: I0224 02:04:28.839964 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:04:28.840072 master-0 kubenswrapper[7864]: I0224 02:04:28.840061 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:04:28.840358 master-0 kubenswrapper[7864]: I0224 02:04:28.840103 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:04:28.840358 master-0 kubenswrapper[7864]: I0224 02:04:28.840194 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:04:28.840358 master-0 kubenswrapper[7864]: I0224 02:04:28.840244 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:04:28.840358 master-0 kubenswrapper[7864]: I0224 02:04:28.840286 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:04:28.840647 master-0 kubenswrapper[7864]: I0224 02:04:28.840362 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:04:28.840647 master-0 kubenswrapper[7864]: I0224 02:04:28.840401 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:04:28.846061 master-0 kubenswrapper[7864]: I0224 02:04:28.846012 7864 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:04:28.848192 master-0 kubenswrapper[7864]: I0224 02:04:28.848116 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-dg77f\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:04:28.848452 master-0 kubenswrapper[7864]: I0224 02:04:28.848390 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:04:28.848527 master-0 kubenswrapper[7864]: I0224 02:04:28.848440 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:04:28.848952 master-0 kubenswrapper[7864]: I0224 02:04:28.848878 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:04:28.849025 master-0 kubenswrapper[7864]: I0224 02:04:28.848976 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:04:28.849304 master-0 kubenswrapper[7864]: I0224 02:04:28.849233 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:04:28.850179 master-0 kubenswrapper[7864]: I0224 02:04:28.850105 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:04:28.938545 master-0 kubenswrapper[7864]: I0224 02:04:28.938433 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:04:28.938545 master-0 kubenswrapper[7864]: I0224 02:04:28.938494 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:04:28.938817 master-0 kubenswrapper[7864]: I0224 02:04:28.938630 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:04:28.939554 master-0 kubenswrapper[7864]: I0224 02:04:28.939518 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:04:28.941679 master-0 kubenswrapper[7864]: I0224 02:04:28.941636 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:04:28.942283 master-0 kubenswrapper[7864]: I0224 02:04:28.942239 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:04:28.942508 master-0 kubenswrapper[7864]: I0224 02:04:28.942427 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:04:28.942829 master-0 kubenswrapper[7864]: I0224 02:04:28.942797 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:04:29.864676 master-0 kubenswrapper[7864]: I0224 02:04:29.863379 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 24 02:04:29.864676 master-0 kubenswrapper[7864]: I0224 02:04:29.863700 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="8436c0c2-ba30-462c-a003-ce076ff59ee1" containerName="installer" containerID="cri-o://591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4" gracePeriod=30 Feb 24 02:04:30.093563 master-0 kubenswrapper[7864]: I0224 02:04:30.093520 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"] Feb 24 02:04:30.265551 master-0 kubenswrapper[7864]: I0224 02:04:30.265408 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv" Feb 24 02:04:30.265669 master-0 kubenswrapper[7864]: E0224 02:04:30.265554 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:30.265669 master-0 kubenswrapper[7864]: E0224 02:04:30.265629 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca podName:8b5bb0d2-d0a5-414b-88ea-e585a8a6f471 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:38.265612405 +0000 UTC m=+42.593266027 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca") pod "route-controller-manager-975858db4-g96fv" (UID: "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471") : configmap "client-ca" not found Feb 24 02:04:30.370781 master-0 kubenswrapper[7864]: I0224 02:04:30.370125 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5rf6m" event={"ID":"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c","Type":"ContainerStarted","Data":"13cd6a6aa3859231bf27568381f642374c95090918911bed4bf38f7204f41cd6"} Feb 24 02:04:30.376635 master-0 kubenswrapper[7864]: I0224 02:04:30.375707 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz"] Feb 24 02:04:30.376635 master-0 kubenswrapper[7864]: I0224 02:04:30.376023 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" event={"ID":"b176946a-c056-441c-9145-b88ca4d75758","Type":"ContainerStarted","Data":"0e94bb6d8da81f692c353aed9041e8cea1ef96da518c0c68ab1453f8b2183856"} Feb 24 02:04:30.386165 master-0 kubenswrapper[7864]: I0224 02:04:30.386122 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" event={"ID":"25190a18-bdac-479b-b526-840d28636be3","Type":"ContainerStarted","Data":"2bd08832a83f0b581af5dd0d4502909325c97e4a1b072cf713d68506345db86b"} Feb 24 02:04:30.484949 master-0 kubenswrapper[7864]: I0224 02:04:30.484903 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"] Feb 24 02:04:30.485233 master-0 kubenswrapper[7864]: I0224 02:04:30.485194 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tntcf"] Feb 24 02:04:30.496088 master-0 kubenswrapper[7864]: I0224 02:04:30.496056 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"] Feb 24 02:04:30.560620 master-0 kubenswrapper[7864]: W0224 02:04:30.558316 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70e2ba24_4871_4d1d_9935_156fdbeb2810.slice/crio-39f5130f968afbc399fff9d4193b2dc7547fc4010b24b675d8cfe4f908871554 WatchSource:0}: Error finding container 39f5130f968afbc399fff9d4193b2dc7547fc4010b24b675d8cfe4f908871554: Status 404 returned error can't find the container with id 39f5130f968afbc399fff9d4193b2dc7547fc4010b24b675d8cfe4f908871554 Feb 24 02:04:30.560620 master-0 kubenswrapper[7864]: W0224 02:04:30.558757 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc3d08db_45fa_4fef_b1fd_2875f22d5c45.slice/crio-fcf31e223d10d13349fb4cdd6b66ae2d5ca057b142afb57903be6e940e13cdfc WatchSource:0}: Error finding container fcf31e223d10d13349fb4cdd6b66ae2d5ca057b142afb57903be6e940e13cdfc: Status 404 returned error can't find the container with id fcf31e223d10d13349fb4cdd6b66ae2d5ca057b142afb57903be6e940e13cdfc Feb 24 02:04:30.566140 master-0 kubenswrapper[7864]: W0224 02:04:30.565955 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a2d8ef6_14ac_490d_a931_7082344d3f46.slice/crio-77149d9718de6b23dc52d1b4901db52831b3adbf959e9b29aeeec05c6d2db97e WatchSource:0}: Error finding container 77149d9718de6b23dc52d1b4901db52831b3adbf959e9b29aeeec05c6d2db97e: Status 404 returned error can't find the container with id 77149d9718de6b23dc52d1b4901db52831b3adbf959e9b29aeeec05c6d2db97e Feb 24 02:04:30.619084 master-0 kubenswrapper[7864]: I0224 02:04:30.618404 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"] Feb 24 02:04:30.627784 master-0 
kubenswrapper[7864]: I0224 02:04:30.626743 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"] Feb 24 02:04:30.639595 master-0 kubenswrapper[7864]: I0224 02:04:30.639528 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"] Feb 24 02:04:30.645496 master-0 kubenswrapper[7864]: I0224 02:04:30.644683 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"] Feb 24 02:04:30.649860 master-0 kubenswrapper[7864]: I0224 02:04:30.649800 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"] Feb 24 02:04:30.652365 master-0 kubenswrapper[7864]: I0224 02:04:30.652216 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"] Feb 24 02:04:30.660968 master-0 kubenswrapper[7864]: W0224 02:04:30.660831 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12b89e05_a503_47aa_90b2_4d741e015b19.slice/crio-5365291f917df72d79e3f7635bb96352fde98df3b379b1b1c5331b7f5952d294 WatchSource:0}: Error finding container 5365291f917df72d79e3f7635bb96352fde98df3b379b1b1c5331b7f5952d294: Status 404 returned error can't find the container with id 5365291f917df72d79e3f7635bb96352fde98df3b379b1b1c5331b7f5952d294 Feb 24 02:04:30.663146 master-0 kubenswrapper[7864]: W0224 02:04:30.662840 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2e9cdff_8c15_43df_b8df_7fe3a73fda86.slice/crio-f571f0a4aeeadbbb146ec437860c7a57c1e485b485fb0691d9981c6a2b22a120 WatchSource:0}: Error finding container f571f0a4aeeadbbb146ec437860c7a57c1e485b485fb0691d9981c6a2b22a120: Status 404 returned error 
can't find the container with id f571f0a4aeeadbbb146ec437860c7a57c1e485b485fb0691d9981c6a2b22a120 Feb 24 02:04:30.665665 master-0 kubenswrapper[7864]: W0224 02:04:30.665614 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91d16f7b_390a_4d9d_99d6_cc8e210801d1.slice/crio-89a7efb88fa53095b161b71e1b8530a4c4c20e49713a6786f3a59609c9325838 WatchSource:0}: Error finding container 89a7efb88fa53095b161b71e1b8530a4c4c20e49713a6786f3a59609c9325838: Status 404 returned error can't find the container with id 89a7efb88fa53095b161b71e1b8530a4c4c20e49713a6786f3a59609c9325838 Feb 24 02:04:30.667388 master-0 kubenswrapper[7864]: W0224 02:04:30.667272 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb8d6627_394c_4087_bfa4_bf7580f6bb4b.slice/crio-0fcf5ec505a8c60fa755fbe1033404d9a2bfa8dd51c8a8904db6a212bec7d594 WatchSource:0}: Error finding container 0fcf5ec505a8c60fa755fbe1033404d9a2bfa8dd51c8a8904db6a212bec7d594: Status 404 returned error can't find the container with id 0fcf5ec505a8c60fa755fbe1033404d9a2bfa8dd51c8a8904db6a212bec7d594 Feb 24 02:04:30.668785 master-0 kubenswrapper[7864]: W0224 02:04:30.668736 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6320dbb5_b84d_4a57_8c65_fbed8421f84a.slice/crio-35cb29197091c21cf559145587727d1cb31b46813a4d0aded5d8409120c45182 WatchSource:0}: Error finding container 35cb29197091c21cf559145587727d1cb31b46813a4d0aded5d8409120c45182: Status 404 returned error can't find the container with id 35cb29197091c21cf559145587727d1cb31b46813a4d0aded5d8409120c45182 Feb 24 02:04:30.672034 master-0 kubenswrapper[7864]: W0224 02:04:30.672018 7864 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02f1d753_983a_4c4a_b1a0_560de173859a.slice/crio-f5861a89c1b826c96f8d7eb1735da2b4cdf59be101852d074de82cd32893d879 WatchSource:0}: Error finding container f5861a89c1b826c96f8d7eb1735da2b4cdf59be101852d074de82cd32893d879: Status 404 returned error can't find the container with id f5861a89c1b826c96f8d7eb1735da2b4cdf59be101852d074de82cd32893d879 Feb 24 02:04:31.409193 master-0 kubenswrapper[7864]: I0224 02:04:31.409057 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" event={"ID":"f2e9cdff-8c15-43df-b8df-7fe3a73fda86","Type":"ContainerStarted","Data":"f571f0a4aeeadbbb146ec437860c7a57c1e485b485fb0691d9981c6a2b22a120"} Feb 24 02:04:31.411371 master-0 kubenswrapper[7864]: I0224 02:04:31.411230 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" event={"ID":"dc3d08db-45fa-4fef-b1fd-2875f22d5c45","Type":"ContainerStarted","Data":"fcf31e223d10d13349fb4cdd6b66ae2d5ca057b142afb57903be6e940e13cdfc"} Feb 24 02:04:31.413693 master-0 kubenswrapper[7864]: I0224 02:04:31.413358 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5rf6m" event={"ID":"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c","Type":"ContainerStarted","Data":"231439d030fa0bf89da2c3a1b0ebaeaa5fe611600712dfc3abffa791f37a575f"} Feb 24 02:04:31.414284 master-0 kubenswrapper[7864]: I0224 02:04:31.414256 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5rf6m" Feb 24 02:04:31.422673 master-0 kubenswrapper[7864]: I0224 02:04:31.422625 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" event={"ID":"db8d6627-394c-4087-bfa4-bf7580f6bb4b","Type":"ContainerStarted","Data":"19d33a97db38aed4fe60654f6c6d7b0c8c528614fa81fd6b06e33fcb80383ce0"} Feb 24 
02:04:31.422749 master-0 kubenswrapper[7864]: I0224 02:04:31.422686 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" event={"ID":"db8d6627-394c-4087-bfa4-bf7580f6bb4b","Type":"ContainerStarted","Data":"9f51b04d3c1486984e3307c52dc019c9d3269455962317058730d274ae7bbc94"} Feb 24 02:04:31.422749 master-0 kubenswrapper[7864]: I0224 02:04:31.422707 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" event={"ID":"db8d6627-394c-4087-bfa4-bf7580f6bb4b","Type":"ContainerStarted","Data":"0fcf5ec505a8c60fa755fbe1033404d9a2bfa8dd51c8a8904db6a212bec7d594"} Feb 24 02:04:31.426770 master-0 kubenswrapper[7864]: I0224 02:04:31.426716 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5rf6m" podStartSLOduration=2.799663677 podStartE2EDuration="9.426701384s" podCreationTimestamp="2026-02-24 02:04:22 +0000 UTC" firstStartedPulling="2026-02-24 02:04:23.151229928 +0000 UTC m=+27.478883550" lastFinishedPulling="2026-02-24 02:04:29.778267595 +0000 UTC m=+34.105921257" observedRunningTime="2026-02-24 02:04:31.425976187 +0000 UTC m=+35.753629809" watchObservedRunningTime="2026-02-24 02:04:31.426701384 +0000 UTC m=+35.754355006" Feb 24 02:04:31.428336 master-0 kubenswrapper[7864]: I0224 02:04:31.428276 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" event={"ID":"02f1d753-983a-4c4a-b1a0-560de173859a","Type":"ContainerStarted","Data":"f5861a89c1b826c96f8d7eb1735da2b4cdf59be101852d074de82cd32893d879"} Feb 24 02:04:31.435507 master-0 kubenswrapper[7864]: I0224 02:04:31.435471 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" 
event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerStarted","Data":"c63bd4b594bd4f6109b07378bf72db7f1a51b694e1fb2208c36ad6c33c119837"} Feb 24 02:04:31.435670 master-0 kubenswrapper[7864]: I0224 02:04:31.435546 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerStarted","Data":"e69376d98cee67244b069177748eb8161f1ffee16e9b9f5abd63b6aff145de6c"} Feb 24 02:04:31.435670 master-0 kubenswrapper[7864]: I0224 02:04:31.435563 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerStarted","Data":"77149d9718de6b23dc52d1b4901db52831b3adbf959e9b29aeeec05c6d2db97e"} Feb 24 02:04:31.435952 master-0 kubenswrapper[7864]: I0224 02:04:31.435718 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:04:31.457277 master-0 kubenswrapper[7864]: I0224 02:04:31.457147 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerStarted","Data":"2a70331e31f309db225d3996274bc257195cff624763144e3200d4a89257b219"} Feb 24 02:04:31.457277 master-0 kubenswrapper[7864]: I0224 02:04:31.457200 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerStarted","Data":"5d6948ce490f3fa6ce851d875800c55d419c3dab3ddc783fd0565943ba63fbf3"} Feb 24 02:04:31.457277 master-0 kubenswrapper[7864]: I0224 02:04:31.457216 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerStarted","Data":"d789b7d1d1c624f3c1461f3405b95a301ab5f66347a0727135e2339f341d9052"} Feb 24 02:04:31.460379 master-0 kubenswrapper[7864]: I0224 02:04:31.457342 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:31.462826 master-0 kubenswrapper[7864]: I0224 02:04:31.462724 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" event={"ID":"6320dbb5-b84d-4a57-8c65-fbed8421f84a","Type":"ContainerStarted","Data":"8fb2770446d8dd83e57430002eaccc69501e1f4b4da7692571e4fb967ebfeb35"} Feb 24 02:04:31.462899 master-0 kubenswrapper[7864]: I0224 02:04:31.462837 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" event={"ID":"6320dbb5-b84d-4a57-8c65-fbed8421f84a","Type":"ContainerStarted","Data":"35cb29197091c21cf559145587727d1cb31b46813a4d0aded5d8409120c45182"} Feb 24 02:04:31.473362 master-0 kubenswrapper[7864]: I0224 02:04:31.465459 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podStartSLOduration=6.465441728 podStartE2EDuration="6.465441728s" podCreationTimestamp="2026-02-24 02:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:31.464500143 +0000 UTC m=+35.792153755" watchObservedRunningTime="2026-02-24 02:04:31.465441728 +0000 UTC m=+35.793095350" Feb 24 02:04:31.473362 master-0 kubenswrapper[7864]: I0224 02:04:31.471630 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tntcf" 
event={"ID":"70e2ba24-4871-4d1d-9935-156fdbeb2810","Type":"ContainerStarted","Data":"39f5130f968afbc399fff9d4193b2dc7547fc4010b24b675d8cfe4f908871554"} Feb 24 02:04:31.473362 master-0 kubenswrapper[7864]: I0224 02:04:31.472500 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" event={"ID":"12b89e05-a503-47aa-90b2-4d741e015b19","Type":"ContainerStarted","Data":"5365291f917df72d79e3f7635bb96352fde98df3b379b1b1c5331b7f5952d294"} Feb 24 02:04:31.475711 master-0 kubenswrapper[7864]: I0224 02:04:31.475678 7864 generic.go:334] "Generic (PLEG): container finished" podID="25190a18-bdac-479b-b526-840d28636be3" containerID="2bd08832a83f0b581af5dd0d4502909325c97e4a1b072cf713d68506345db86b" exitCode=0 Feb 24 02:04:31.475766 master-0 kubenswrapper[7864]: I0224 02:04:31.475727 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" event={"ID":"25190a18-bdac-479b-b526-840d28636be3","Type":"ContainerDied","Data":"2bd08832a83f0b581af5dd0d4502909325c97e4a1b072cf713d68506345db86b"} Feb 24 02:04:31.475766 master-0 kubenswrapper[7864]: I0224 02:04:31.475743 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" event={"ID":"25190a18-bdac-479b-b526-840d28636be3","Type":"ContainerStarted","Data":"63c62ec4ef85b454c2032773739c7fd21c21de0158d8f07f7e5dd6a835789cb3"} Feb 24 02:04:31.475766 master-0 kubenswrapper[7864]: I0224 02:04:31.475753 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" event={"ID":"25190a18-bdac-479b-b526-840d28636be3","Type":"ContainerStarted","Data":"b92739f4eec45b9fa61d0a87a22d8bd988c1f4a14cd1e3cd849380cb57883acc"} Feb 24 02:04:31.477465 master-0 kubenswrapper[7864]: I0224 02:04:31.477176 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" 
event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerStarted","Data":"89a7efb88fa53095b161b71e1b8530a4c4c20e49713a6786f3a59609c9325838"} Feb 24 02:04:31.487101 master-0 kubenswrapper[7864]: I0224 02:04:31.486672 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podStartSLOduration=6.486660017 podStartE2EDuration="6.486660017s" podCreationTimestamp="2026-02-24 02:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:31.485062039 +0000 UTC m=+35.812715651" watchObservedRunningTime="2026-02-24 02:04:31.486660017 +0000 UTC m=+35.814313639" Feb 24 02:04:32.528017 master-0 kubenswrapper[7864]: I0224 02:04:32.510825 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" podStartSLOduration=8.401008111 podStartE2EDuration="16.510800432s" podCreationTimestamp="2026-02-24 02:04:16 +0000 UTC" firstStartedPulling="2026-02-24 02:04:21.845941815 +0000 UTC m=+26.173595437" lastFinishedPulling="2026-02-24 02:04:29.955734136 +0000 UTC m=+34.283387758" observedRunningTime="2026-02-24 02:04:31.513538101 +0000 UTC m=+35.841191723" watchObservedRunningTime="2026-02-24 02:04:32.510800432 +0000 UTC m=+36.838454044" Feb 24 02:04:32.529732 master-0 kubenswrapper[7864]: I0224 02:04:32.529671 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 24 02:04:32.530827 master-0 kubenswrapper[7864]: I0224 02:04:32.530804 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.567860 master-0 kubenswrapper[7864]: I0224 02:04:32.567108 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 24 02:04:32.615603 master-0 kubenswrapper[7864]: I0224 02:04:32.614561 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:32.638610 master-0 kubenswrapper[7864]: E0224 02:04:32.619662 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 02:04:32.638610 master-0 kubenswrapper[7864]: E0224 02:04:32.619738 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca podName:80c4b74e-052d-4202-94aa-a9d35a4b2410 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:40.619715679 +0000 UTC m=+44.947369311 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca") pod "controller-manager-56767fb5d4-2ghfz" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410") : configmap "client-ca" not found Feb 24 02:04:32.719521 master-0 kubenswrapper[7864]: I0224 02:04:32.719446 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.719872 master-0 kubenswrapper[7864]: I0224 02:04:32.719559 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-var-lock\") pod \"installer-2-master-0\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.719872 master-0 kubenswrapper[7864]: I0224 02:04:32.719642 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.821243 master-0 kubenswrapper[7864]: I0224 02:04:32.821082 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-var-lock\") pod \"installer-2-master-0\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.821243 master-0 kubenswrapper[7864]: I0224 02:04:32.821161 7864 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.821243 master-0 kubenswrapper[7864]: I0224 02:04:32.821198 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.821243 master-0 kubenswrapper[7864]: I0224 02:04:32.821201 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-var-lock\") pod \"installer-2-master-0\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.821605 master-0 kubenswrapper[7864]: I0224 02:04:32.821265 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.836232 master-0 kubenswrapper[7864]: I0224 02:04:32.836194 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:32.890514 master-0 kubenswrapper[7864]: I0224 02:04:32.890471 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 02:04:33.151031 master-0 kubenswrapper[7864]: I0224 02:04:33.150833 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:04:35.852533 master-0 kubenswrapper[7864]: I0224 02:04:35.852475 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"
Feb 24 02:04:36.247004 master-0 kubenswrapper[7864]: I0224 02:04:36.246482 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:04:36.247621 master-0 kubenswrapper[7864]: I0224 02:04:36.247332 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:04:36.256653 master-0 kubenswrapper[7864]: I0224 02:04:36.253218 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:04:36.295286 master-0 kubenswrapper[7864]: I0224 02:04:36.295182 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 24 02:04:36.535170 master-0 kubenswrapper[7864]: I0224 02:04:36.535130 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerStarted","Data":"f88738c7cb3808e8ebb5ddd209f4e28577d6aec5e69f689e145c36b78a77fe4b"}
Feb 24 02:04:36.536482 master-0 kubenswrapper[7864]: I0224 02:04:36.536446 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:04:36.537678 master-0 kubenswrapper[7864]: I0224 02:04:36.537651 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body=
Feb 24 02:04:36.537793 master-0 kubenswrapper[7864]: I0224 02:04:36.537689 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused"
Feb 24 02:04:36.546703 master-0 kubenswrapper[7864]: I0224 02:04:36.546617 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" event={"ID":"f2e9cdff-8c15-43df-b8df-7fe3a73fda86","Type":"ContainerStarted","Data":"2c84a94f2a6a9cb8677b242aead424cd42f233786a43d4ea77fa8c1270383306"}
Feb 24 02:04:36.560345 master-0 kubenswrapper[7864]: I0224 02:04:36.559557 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc","Type":"ContainerStarted","Data":"a0a6e5b25bf1447fa4c8132cd0c3239d84e0332243511da2a3a1feb525dc8ed3"}
Feb 24 02:04:36.563319 master-0 kubenswrapper[7864]: I0224 02:04:36.563127 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" event={"ID":"dc3d08db-45fa-4fef-b1fd-2875f22d5c45","Type":"ContainerStarted","Data":"1a9c80348ec3d9615f2f58e4f90b6f801e400fc962ca77ec229b6df397014b2d"}
Feb 24 02:04:36.583153 master-0 kubenswrapper[7864]: I0224 02:04:36.579984 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tntcf" event={"ID":"70e2ba24-4871-4d1d-9935-156fdbeb2810","Type":"ContainerStarted","Data":"7d3c45256a841eb70d92b2ac89a09096ea7f9fe8d5d44f496e1fd00978ad355c"}
Feb 24 02:04:36.583153 master-0 kubenswrapper[7864]: I0224 02:04:36.580036 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tntcf" event={"ID":"70e2ba24-4871-4d1d-9935-156fdbeb2810","Type":"ContainerStarted","Data":"9fc038be325d2c443e30ca715ec9ed96813d02435d90dbec4dda98e800fa480c"}
Feb 24 02:04:36.586034 master-0 kubenswrapper[7864]: I0224 02:04:36.586002 7864 generic.go:334] "Generic (PLEG): container finished" podID="b176946a-c056-441c-9145-b88ca4d75758" containerID="428d9eac1aa20c4549b1b4238b89b04f2faa950b7b0a74457007efebb7f09258" exitCode=0
Feb 24 02:04:36.587652 master-0 kubenswrapper[7864]: I0224 02:04:36.587625 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" event={"ID":"b176946a-c056-441c-9145-b88ca4d75758","Type":"ContainerDied","Data":"428d9eac1aa20c4549b1b4238b89b04f2faa950b7b0a74457007efebb7f09258"}
Feb 24 02:04:36.603640 master-0 kubenswrapper[7864]: I0224 02:04:36.602421 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:04:36.637622 master-0 kubenswrapper[7864]: I0224 02:04:36.630011 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"]
Feb 24 02:04:36.637622 master-0 kubenswrapper[7864]: I0224 02:04:36.630229 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" podUID="4f72a322-2142-482a-9b0b-2ad890181d7a" containerName="cluster-version-operator" containerID="cri-o://838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794" gracePeriod=130
Feb 24 02:04:37.040144 master-0 kubenswrapper[7864]: I0224 02:04:37.039846 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:04:37.100262 master-0 kubenswrapper[7864]: I0224 02:04:37.100225 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-ssl-certs\") pod \"4f72a322-2142-482a-9b0b-2ad890181d7a\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") "
Feb 24 02:04:37.100355 master-0 kubenswrapper[7864]: I0224 02:04:37.100332 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f72a322-2142-482a-9b0b-2ad890181d7a-service-ca\") pod \"4f72a322-2142-482a-9b0b-2ad890181d7a\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") "
Feb 24 02:04:37.100389 master-0 kubenswrapper[7864]: I0224 02:04:37.100360 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") pod \"4f72a322-2142-482a-9b0b-2ad890181d7a\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") "
Feb 24 02:04:37.100389 master-0 kubenswrapper[7864]: I0224 02:04:37.100343 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "4f72a322-2142-482a-9b0b-2ad890181d7a" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:04:37.100457 master-0 kubenswrapper[7864]: I0224 02:04:37.100408 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f72a322-2142-482a-9b0b-2ad890181d7a-kube-api-access\") pod \"4f72a322-2142-482a-9b0b-2ad890181d7a\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") "
Feb 24 02:04:37.100491 master-0 kubenswrapper[7864]: I0224 02:04:37.100465 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-cvo-updatepayloads\") pod \"4f72a322-2142-482a-9b0b-2ad890181d7a\" (UID: \"4f72a322-2142-482a-9b0b-2ad890181d7a\") "
Feb 24 02:04:37.100722 master-0 kubenswrapper[7864]: I0224 02:04:37.100702 7864 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:37.100787 master-0 kubenswrapper[7864]: I0224 02:04:37.100762 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "4f72a322-2142-482a-9b0b-2ad890181d7a" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:04:37.100874 master-0 kubenswrapper[7864]: I0224 02:04:37.100851 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f72a322-2142-482a-9b0b-2ad890181d7a-service-ca" (OuterVolumeSpecName: "service-ca") pod "4f72a322-2142-482a-9b0b-2ad890181d7a" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:04:37.103909 master-0 kubenswrapper[7864]: I0224 02:04:37.103886 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f72a322-2142-482a-9b0b-2ad890181d7a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4f72a322-2142-482a-9b0b-2ad890181d7a" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:04:37.104653 master-0 kubenswrapper[7864]: I0224 02:04:37.104624 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4f72a322-2142-482a-9b0b-2ad890181d7a" (UID: "4f72a322-2142-482a-9b0b-2ad890181d7a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:04:37.201485 master-0 kubenswrapper[7864]: I0224 02:04:37.201409 7864 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f72a322-2142-482a-9b0b-2ad890181d7a-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:37.201485 master-0 kubenswrapper[7864]: I0224 02:04:37.201444 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f72a322-2142-482a-9b0b-2ad890181d7a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:37.201485 master-0 kubenswrapper[7864]: I0224 02:04:37.201456 7864 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f72a322-2142-482a-9b0b-2ad890181d7a-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:37.201485 master-0 kubenswrapper[7864]: I0224 02:04:37.201466 7864 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f72a322-2142-482a-9b0b-2ad890181d7a-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:37.597609 master-0 kubenswrapper[7864]: I0224 02:04:37.596266 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" event={"ID":"b176946a-c056-441c-9145-b88ca4d75758","Type":"ContainerStarted","Data":"a40a29f749e0573ac6d90972333ff728d387ff0f88ce4e87bf1c84f1f2298927"}
Feb 24 02:04:37.602446 master-0 kubenswrapper[7864]: I0224 02:04:37.602206 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc","Type":"ContainerStarted","Data":"2e4c4c5eb4b6dd6fc07e7770b42c4804db7c38c00642a4b3f6368952d27cf824"}
Feb 24 02:04:37.607017 master-0 kubenswrapper[7864]: I0224 02:04:37.605787 7864 generic.go:334] "Generic (PLEG): container finished" podID="4f72a322-2142-482a-9b0b-2ad890181d7a" containerID="838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794" exitCode=0
Feb 24 02:04:37.607017 master-0 kubenswrapper[7864]: I0224 02:04:37.605888 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"
Feb 24 02:04:37.607017 master-0 kubenswrapper[7864]: I0224 02:04:37.606504 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" event={"ID":"4f72a322-2142-482a-9b0b-2ad890181d7a","Type":"ContainerDied","Data":"838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794"}
Feb 24 02:04:37.607017 master-0 kubenswrapper[7864]: I0224 02:04:37.606538 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt" event={"ID":"4f72a322-2142-482a-9b0b-2ad890181d7a","Type":"ContainerDied","Data":"7240cd722a282af2e89b1de235fe3e28c9e5681ff6d5937b5f928d0aa4e3ea83"}
Feb 24 02:04:37.607017 master-0 kubenswrapper[7864]: I0224 02:04:37.606562 7864 scope.go:117] "RemoveContainer" containerID="838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794"
Feb 24 02:04:37.617755 master-0 kubenswrapper[7864]: I0224 02:04:37.615116 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" event={"ID":"dc3d08db-45fa-4fef-b1fd-2875f22d5c45","Type":"ContainerStarted","Data":"206e32f211480b70d154888e7eaab059acddf0419748ec8afd5db9a5bab1c507"}
Feb 24 02:04:37.622744 master-0 kubenswrapper[7864]: I0224 02:04:37.622715 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:04:37.631857 master-0 kubenswrapper[7864]: I0224 02:04:37.631776 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" podStartSLOduration=6.938285652 podStartE2EDuration="12.631754416s" podCreationTimestamp="2026-02-24 02:04:25 +0000 UTC" firstStartedPulling="2026-02-24 02:04:30.171502194 +0000 UTC m=+34.499155816" lastFinishedPulling="2026-02-24 02:04:35.864970948 +0000 UTC m=+40.192624580" observedRunningTime="2026-02-24 02:04:37.628772788 +0000 UTC m=+41.956426420" watchObservedRunningTime="2026-02-24 02:04:37.631754416 +0000 UTC m=+41.959408058"
Feb 24 02:04:37.643011 master-0 kubenswrapper[7864]: I0224 02:04:37.642827 7864 scope.go:117] "RemoveContainer" containerID="838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794"
Feb 24 02:04:37.646247 master-0 kubenswrapper[7864]: E0224 02:04:37.645806 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794\": container with ID starting with 838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794 not found: ID does not exist" containerID="838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794"
Feb 24 02:04:37.646247 master-0 kubenswrapper[7864]: I0224 02:04:37.645856 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794"} err="failed to get container status \"838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794\": rpc error: code = NotFound desc = could not find container \"838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794\": container with ID starting with 838d02aabd1baa9d3da84bd6d0dabc756420cacab2fb6dc26dedcb269a2fa794 not found: ID does not exist"
Feb 24 02:04:37.649005 master-0 kubenswrapper[7864]: I0224 02:04:37.648855 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=5.648824655 podStartE2EDuration="5.648824655s" podCreationTimestamp="2026-02-24 02:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:37.644156125 +0000 UTC m=+41.971809757" watchObservedRunningTime="2026-02-24 02:04:37.648824655 +0000 UTC m=+41.976478277"
Feb 24 02:04:37.711912 master-0 kubenswrapper[7864]: I0224 02:04:37.711602 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"]
Feb 24 02:04:37.712209 master-0 kubenswrapper[7864]: I0224 02:04:37.712163 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-v5tpt"]
Feb 24 02:04:37.746820 master-0 kubenswrapper[7864]: I0224 02:04:37.746698 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-57476485-9cjj5"]
Feb 24 02:04:37.749141 master-0 kubenswrapper[7864]: E0224 02:04:37.749114 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f72a322-2142-482a-9b0b-2ad890181d7a" containerName="cluster-version-operator"
Feb 24 02:04:37.749141 master-0 kubenswrapper[7864]: I0224 02:04:37.749141 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f72a322-2142-482a-9b0b-2ad890181d7a" containerName="cluster-version-operator"
Feb 24 02:04:37.749285 master-0 kubenswrapper[7864]: I0224 02:04:37.749259 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f72a322-2142-482a-9b0b-2ad890181d7a" containerName="cluster-version-operator"
Feb 24 02:04:37.749918 master-0 kubenswrapper[7864]: I0224 02:04:37.749896 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.755389 master-0 kubenswrapper[7864]: I0224 02:04:37.755339 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 24 02:04:37.755507 master-0 kubenswrapper[7864]: I0224 02:04:37.755423 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 24 02:04:37.755507 master-0 kubenswrapper[7864]: I0224 02:04:37.755477 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 02:04:37.810966 master-0 kubenswrapper[7864]: I0224 02:04:37.810893 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.810966 master-0 kubenswrapper[7864]: I0224 02:04:37.810946 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/732a3831-20e0-47dc-a29a-8bb4659541b7-service-ca\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.811178 master-0 kubenswrapper[7864]: I0224 02:04:37.811010 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/732a3831-20e0-47dc-a29a-8bb4659541b7-serving-cert\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.811892 master-0 kubenswrapper[7864]: I0224 02:04:37.811231 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732a3831-20e0-47dc-a29a-8bb4659541b7-kube-api-access\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.811892 master-0 kubenswrapper[7864]: I0224 02:04:37.811432 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-ssl-certs\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.882772 master-0 kubenswrapper[7864]: I0224 02:04:37.882656 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f72a322-2142-482a-9b0b-2ad890181d7a" path="/var/lib/kubelet/pods/4f72a322-2142-482a-9b0b-2ad890181d7a/volumes"
Feb 24 02:04:37.916208 master-0 kubenswrapper[7864]: I0224 02:04:37.916027 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.916208 master-0 kubenswrapper[7864]: I0224 02:04:37.916084 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/732a3831-20e0-47dc-a29a-8bb4659541b7-service-ca\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.916208 master-0 kubenswrapper[7864]: I0224 02:04:37.916135 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/732a3831-20e0-47dc-a29a-8bb4659541b7-serving-cert\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.916208 master-0 kubenswrapper[7864]: I0224 02:04:37.916161 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732a3831-20e0-47dc-a29a-8bb4659541b7-kube-api-access\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.916208 master-0 kubenswrapper[7864]: I0224 02:04:37.916189 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-ssl-certs\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.916475 master-0 kubenswrapper[7864]: I0224 02:04:37.916300 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-ssl-certs\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.916475 master-0 kubenswrapper[7864]: I0224 02:04:37.916348 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.918868 master-0 kubenswrapper[7864]: I0224 02:04:37.918845 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/732a3831-20e0-47dc-a29a-8bb4659541b7-service-ca\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.922905 master-0 kubenswrapper[7864]: I0224 02:04:37.922474 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/732a3831-20e0-47dc-a29a-8bb4659541b7-serving-cert\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:37.943214 master-0 kubenswrapper[7864]: I0224 02:04:37.943145 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732a3831-20e0-47dc-a29a-8bb4659541b7-kube-api-access\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:38.029339 master-0 kubenswrapper[7864]: I0224 02:04:38.029289 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 24 02:04:38.070647 master-0 kubenswrapper[7864]: I0224 02:04:38.070556 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:04:38.321165 master-0 kubenswrapper[7864]: I0224 02:04:38.321089 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca\") pod \"route-controller-manager-975858db4-g96fv\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:38.321389 master-0 kubenswrapper[7864]: E0224 02:04:38.321302 7864 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 24 02:04:38.321425 master-0 kubenswrapper[7864]: E0224 02:04:38.321405 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca podName:8b5bb0d2-d0a5-414b-88ea-e585a8a6f471 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:54.321378048 +0000 UTC m=+58.649031670 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca") pod "route-controller-manager-975858db4-g96fv" (UID: "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471") : configmap "client-ca" not found
Feb 24 02:04:39.143497 master-0 kubenswrapper[7864]: I0224 02:04:39.143413 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 24 02:04:39.144946 master-0 kubenswrapper[7864]: I0224 02:04:39.144875 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.150342 master-0 kubenswrapper[7864]: I0224 02:04:39.150283 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 24 02:04:39.158745 master-0 kubenswrapper[7864]: I0224 02:04:39.152549 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 24 02:04:39.239141 master-0 kubenswrapper[7864]: I0224 02:04:39.239102 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f691a-1472-4198-bcd5-6956217d30f9-kube-api-access\") pod \"installer-1-master-0\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.239367 master-0 kubenswrapper[7864]: I0224 02:04:39.239169 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-var-lock\") pod \"installer-1-master-0\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.239367 master-0 kubenswrapper[7864]: I0224 02:04:39.239274 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.341711 master-0 kubenswrapper[7864]: I0224 02:04:39.341071 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f691a-1472-4198-bcd5-6956217d30f9-kube-api-access\") pod \"installer-1-master-0\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.341879 master-0 kubenswrapper[7864]: I0224 02:04:39.341758 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-var-lock\") pod \"installer-1-master-0\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.341879 master-0 kubenswrapper[7864]: I0224 02:04:39.341856 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-var-lock\") pod \"installer-1-master-0\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.342113 master-0 kubenswrapper[7864]: I0224 02:04:39.342082 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.342195 master-0 kubenswrapper[7864]: I0224 02:04:39.342173 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.357934 master-0 kubenswrapper[7864]: I0224 02:04:39.357877 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f691a-1472-4198-bcd5-6956217d30f9-kube-api-access\") pod \"installer-1-master-0\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.535374 master-0 kubenswrapper[7864]: I0224 02:04:39.535267 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 02:04:39.632958 master-0 kubenswrapper[7864]: I0224 02:04:39.632768 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="4fd14dbd-ebb1-4442-85db-9a8567d3cdcc" containerName="installer" containerID="cri-o://2e4c4c5eb4b6dd6fc07e7770b42c4804db7c38c00642a4b3f6368952d27cf824" gracePeriod=30
Feb 24 02:04:40.424435 master-0 kubenswrapper[7864]: I0224 02:04:40.424384 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 24 02:04:40.425037 master-0 kubenswrapper[7864]: I0224 02:04:40.424995 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.469209 master-0 kubenswrapper[7864]: I0224 02:04:40.440954 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 24 02:04:40.469209 master-0 kubenswrapper[7864]: I0224 02:04:40.469156 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:04:40.469642 master-0 kubenswrapper[7864]: I0224 02:04:40.469603 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:04:40.478300 master-0 kubenswrapper[7864]: I0224 02:04:40.471550 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-var-lock\") pod \"installer-3-master-0\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.478300 master-0 kubenswrapper[7864]: I0224 02:04:40.471666 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.478300 master-0 kubenswrapper[7864]: I0224 02:04:40.471723 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/683deae1-94b1-4c17-a73f-ad628a09134b-kube-api-access\") pod \"installer-3-master-0\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.485996 master-0 kubenswrapper[7864]: I0224 02:04:40.485910 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:04:40.574675 master-0 kubenswrapper[7864]: I0224 02:04:40.573304 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-var-lock\") pod \"installer-3-master-0\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.574675 master-0 kubenswrapper[7864]: I0224 02:04:40.573924 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-var-lock\") pod \"installer-3-master-0\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.576226 master-0 kubenswrapper[7864]: I0224 02:04:40.576196 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.576473 master-0 kubenswrapper[7864]: I0224 02:04:40.576445 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/683deae1-94b1-4c17-a73f-ad628a09134b-kube-api-access\") pod \"installer-3-master-0\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.581670 master-0 kubenswrapper[7864]: I0224 02:04:40.576954 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.592841 master-0 kubenswrapper[7864]: I0224 02:04:40.592808 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/683deae1-94b1-4c17-a73f-ad628a09134b-kube-api-access\") pod \"installer-3-master-0\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.649003 master-0 kubenswrapper[7864]: I0224 02:04:40.644465 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_4fd14dbd-ebb1-4442-85db-9a8567d3cdcc/installer/0.log"
Feb 24 02:04:40.649003 master-0 kubenswrapper[7864]: I0224 02:04:40.644544 7864 generic.go:334] "Generic (PLEG): container finished" podID="4fd14dbd-ebb1-4442-85db-9a8567d3cdcc" containerID="2e4c4c5eb4b6dd6fc07e7770b42c4804db7c38c00642a4b3f6368952d27cf824" exitCode=1
Feb 24 02:04:40.649003 master-0 kubenswrapper[7864]: I0224 02:04:40.644703 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc","Type":"ContainerDied","Data":"2e4c4c5eb4b6dd6fc07e7770b42c4804db7c38c00642a4b3f6368952d27cf824"}
Feb 24 02:04:40.651018 master-0 kubenswrapper[7864]: I0224 02:04:40.650965 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:04:40.679453 master-0 kubenswrapper[7864]: I0224 02:04:40.679303 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca\") pod \"controller-manager-56767fb5d4-2ghfz\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz"
Feb 24 02:04:40.680175 master-0 kubenswrapper[7864]: E0224 02:04:40.680122 7864 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 24 02:04:40.680273 master-0 kubenswrapper[7864]: E0224 02:04:40.680234 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca podName:80c4b74e-052d-4202-94aa-a9d35a4b2410 nodeName:}" failed. No retries permitted until 2026-02-24 02:04:56.680197873 +0000 UTC m=+61.007851495 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca") pod "controller-manager-56767fb5d4-2ghfz" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410") : configmap "client-ca" not found
Feb 24 02:04:40.787427 master-0 kubenswrapper[7864]: I0224 02:04:40.787343 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:04:40.943830 master-0 kubenswrapper[7864]: I0224 02:04:40.940919 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:04:40.945503 master-0 kubenswrapper[7864]: I0224 02:04:40.945066 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56767fb5d4-2ghfz"]
Feb 24 02:04:40.945503 master-0 kubenswrapper[7864]: E0224 02:04:40.945445 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" podUID="80c4b74e-052d-4202-94aa-a9d35a4b2410"
Feb 24 02:04:40.963598 master-0 kubenswrapper[7864]: I0224 02:04:40.963526 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"]
Feb 24 02:04:40.963974
master-0 kubenswrapper[7864]: E0224 02:04:40.963938 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv" podUID="8b5bb0d2-d0a5-414b-88ea-e585a8a6f471" Feb 24 02:04:41.030988 master-0 kubenswrapper[7864]: W0224 02:04:41.030917 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod732a3831_20e0_47dc_a29a_8bb4659541b7.slice/crio-bb7daa4c061606545ddec9122a80563e4f785ed9c98cafaa54bb7196f126bd02 WatchSource:0}: Error finding container bb7daa4c061606545ddec9122a80563e4f785ed9c98cafaa54bb7196f126bd02: Status 404 returned error can't find the container with id bb7daa4c061606545ddec9122a80563e4f785ed9c98cafaa54bb7196f126bd02 Feb 24 02:04:41.246526 master-0 kubenswrapper[7864]: I0224 02:04:41.246250 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_4fd14dbd-ebb1-4442-85db-9a8567d3cdcc/installer/0.log" Feb 24 02:04:41.246526 master-0 kubenswrapper[7864]: I0224 02:04:41.246321 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:41.290338 master-0 kubenswrapper[7864]: I0224 02:04:41.290286 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-var-lock\") pod \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " Feb 24 02:04:41.290501 master-0 kubenswrapper[7864]: I0224 02:04:41.290367 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kubelet-dir\") pod \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " Feb 24 02:04:41.290501 master-0 kubenswrapper[7864]: I0224 02:04:41.290480 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kube-api-access\") pod \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\" (UID: \"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc\") " Feb 24 02:04:41.290893 master-0 kubenswrapper[7864]: I0224 02:04:41.290650 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-var-lock" (OuterVolumeSpecName: "var-lock") pod "4fd14dbd-ebb1-4442-85db-9a8567d3cdcc" (UID: "4fd14dbd-ebb1-4442-85db-9a8567d3cdcc"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:04:41.290893 master-0 kubenswrapper[7864]: I0224 02:04:41.290852 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:41.290893 master-0 kubenswrapper[7864]: I0224 02:04:41.290876 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4fd14dbd-ebb1-4442-85db-9a8567d3cdcc" (UID: "4fd14dbd-ebb1-4442-85db-9a8567d3cdcc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:04:41.295929 master-0 kubenswrapper[7864]: I0224 02:04:41.295873 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4fd14dbd-ebb1-4442-85db-9a8567d3cdcc" (UID: "4fd14dbd-ebb1-4442-85db-9a8567d3cdcc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:04:41.392929 master-0 kubenswrapper[7864]: I0224 02:04:41.392850 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:41.392929 master-0 kubenswrapper[7864]: I0224 02:04:41.392909 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:41.399235 master-0 kubenswrapper[7864]: I0224 02:04:41.399184 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 24 02:04:41.409881 master-0 kubenswrapper[7864]: W0224 02:04:41.409630 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod598f691a_1472_4198_bcd5_6956217d30f9.slice/crio-2b23e83f678bb636a25eee04dbc168b7185773cd9d9737c0ad45b8d37c504e40 WatchSource:0}: Error finding container 2b23e83f678bb636a25eee04dbc168b7185773cd9d9737c0ad45b8d37c504e40: Status 404 returned error can't find the container with id 2b23e83f678bb636a25eee04dbc168b7185773cd9d9737c0ad45b8d37c504e40 Feb 24 02:04:41.472223 master-0 kubenswrapper[7864]: I0224 02:04:41.472109 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 24 02:04:41.653080 master-0 kubenswrapper[7864]: I0224 02:04:41.652161 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" event={"ID":"6320dbb5-b84d-4a57-8c65-fbed8421f84a","Type":"ContainerStarted","Data":"0a8ba6ce68a26edc1451ec182c1b30b8fe0084be7cbefabb415d809de554af9e"} Feb 24 02:04:41.653080 master-0 kubenswrapper[7864]: I0224 02:04:41.653049 7864 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:04:41.663209 master-0 kubenswrapper[7864]: I0224 02:04:41.663126 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"598f691a-1472-4198-bcd5-6956217d30f9","Type":"ContainerStarted","Data":"2b23e83f678bb636a25eee04dbc168b7185773cd9d9737c0ad45b8d37c504e40"} Feb 24 02:04:41.668627 master-0 kubenswrapper[7864]: I0224 02:04:41.667555 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" event={"ID":"12b89e05-a503-47aa-90b2-4d741e015b19","Type":"ContainerStarted","Data":"e83ff6dbf21d18933989d16dabdd55b76dba2e6aefd8a69a1cc0797cddc207b9"} Feb 24 02:04:41.668627 master-0 kubenswrapper[7864]: I0224 02:04:41.668595 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:04:41.674957 master-0 kubenswrapper[7864]: I0224 02:04:41.674905 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"683deae1-94b1-4c17-a73f-ad628a09134b","Type":"ContainerStarted","Data":"63cac87aa9f86fe69782f0e078c00d8a3d420e25f4f78bbbbc9cfebe09080f84"} Feb 24 02:04:41.678058 master-0 kubenswrapper[7864]: I0224 02:04:41.677972 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:04:41.683508 master-0 kubenswrapper[7864]: I0224 02:04:41.680094 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" event={"ID":"732a3831-20e0-47dc-a29a-8bb4659541b7","Type":"ContainerStarted","Data":"535ac4e7253be828eb8365a54787c4f76f98be222f5aa26756ea0b019790bb97"} Feb 24 02:04:41.683508 master-0 
kubenswrapper[7864]: I0224 02:04:41.680125 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" event={"ID":"732a3831-20e0-47dc-a29a-8bb4659541b7","Type":"ContainerStarted","Data":"bb7daa4c061606545ddec9122a80563e4f785ed9c98cafaa54bb7196f126bd02"} Feb 24 02:04:41.689087 master-0 kubenswrapper[7864]: I0224 02:04:41.689051 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" event={"ID":"02f1d753-983a-4c4a-b1a0-560de173859a","Type":"ContainerStarted","Data":"3b203d15747d627cfdb5de00e47a4742a40d9cb938d42607f4724f640a852526"} Feb 24 02:04:41.690106 master-0 kubenswrapper[7864]: I0224 02:04:41.690083 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:04:41.693374 master-0 kubenswrapper[7864]: I0224 02:04:41.692186 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_4fd14dbd-ebb1-4442-85db-9a8567d3cdcc/installer/0.log" Feb 24 02:04:41.693374 master-0 kubenswrapper[7864]: I0224 02:04:41.692259 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv" Feb 24 02:04:41.693374 master-0 kubenswrapper[7864]: I0224 02:04:41.692727 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 02:04:41.696906 master-0 kubenswrapper[7864]: I0224 02:04:41.694914 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"4fd14dbd-ebb1-4442-85db-9a8567d3cdcc","Type":"ContainerDied","Data":"a0a6e5b25bf1447fa4c8132cd0c3239d84e0332243511da2a3a1feb525dc8ed3"} Feb 24 02:04:41.696906 master-0 kubenswrapper[7864]: I0224 02:04:41.694961 7864 scope.go:117] "RemoveContainer" containerID="2e4c4c5eb4b6dd6fc07e7770b42c4804db7c38c00642a4b3f6368952d27cf824" Feb 24 02:04:41.696906 master-0 kubenswrapper[7864]: I0224 02:04:41.695313 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:04:41.696906 master-0 kubenswrapper[7864]: I0224 02:04:41.696286 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:41.706907 master-0 kubenswrapper[7864]: I0224 02:04:41.706870 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv" Feb 24 02:04:41.716196 master-0 kubenswrapper[7864]: I0224 02:04:41.716164 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz" Feb 24 02:04:41.730194 master-0 kubenswrapper[7864]: I0224 02:04:41.730070 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" podStartSLOduration=4.730047209 podStartE2EDuration="4.730047209s" podCreationTimestamp="2026-02-24 02:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:41.729199829 +0000 UTC m=+46.056853451" watchObservedRunningTime="2026-02-24 02:04:41.730047209 +0000 UTC m=+46.057700831" Feb 24 02:04:41.801208 master-0 kubenswrapper[7864]: I0224 02:04:41.799546 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxwmm\" (UniqueName: \"kubernetes.io/projected/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-kube-api-access-dxwmm\") pod \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " Feb 24 02:04:41.801208 master-0 kubenswrapper[7864]: I0224 02:04:41.799635 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqbdx\" (UniqueName: \"kubernetes.io/projected/80c4b74e-052d-4202-94aa-a9d35a4b2410-kube-api-access-dqbdx\") pod \"80c4b74e-052d-4202-94aa-a9d35a4b2410\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " Feb 24 02:04:41.801208 master-0 kubenswrapper[7864]: I0224 02:04:41.799699 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-serving-cert\") pod \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " Feb 24 02:04:41.801208 master-0 kubenswrapper[7864]: I0224 02:04:41.799780 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/80c4b74e-052d-4202-94aa-a9d35a4b2410-serving-cert\") pod \"80c4b74e-052d-4202-94aa-a9d35a4b2410\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " Feb 24 02:04:41.801208 master-0 kubenswrapper[7864]: I0224 02:04:41.799802 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-proxy-ca-bundles\") pod \"80c4b74e-052d-4202-94aa-a9d35a4b2410\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " Feb 24 02:04:41.801208 master-0 kubenswrapper[7864]: I0224 02:04:41.799831 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-config\") pod \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\" (UID: \"8b5bb0d2-d0a5-414b-88ea-e585a8a6f471\") " Feb 24 02:04:41.801208 master-0 kubenswrapper[7864]: I0224 02:04:41.799882 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-config\") pod \"80c4b74e-052d-4202-94aa-a9d35a4b2410\" (UID: \"80c4b74e-052d-4202-94aa-a9d35a4b2410\") " Feb 24 02:04:41.801208 master-0 kubenswrapper[7864]: I0224 02:04:41.800838 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "80c4b74e-052d-4202-94aa-a9d35a4b2410" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:41.801602 master-0 kubenswrapper[7864]: I0224 02:04:41.801232 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-config" (OuterVolumeSpecName: "config") pod "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471" (UID: "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:41.802535 master-0 kubenswrapper[7864]: I0224 02:04:41.801749 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-config" (OuterVolumeSpecName: "config") pod "80c4b74e-052d-4202-94aa-a9d35a4b2410" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:04:41.805437 master-0 kubenswrapper[7864]: I0224 02:04:41.805382 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80c4b74e-052d-4202-94aa-a9d35a4b2410-kube-api-access-dqbdx" (OuterVolumeSpecName: "kube-api-access-dqbdx") pod "80c4b74e-052d-4202-94aa-a9d35a4b2410" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410"). InnerVolumeSpecName "kube-api-access-dqbdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:04:41.806000 master-0 kubenswrapper[7864]: I0224 02:04:41.805973 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-kube-api-access-dxwmm" (OuterVolumeSpecName: "kube-api-access-dxwmm") pod "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471" (UID: "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471"). InnerVolumeSpecName "kube-api-access-dxwmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:04:41.806382 master-0 kubenswrapper[7864]: I0224 02:04:41.806353 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471" (UID: "8b5bb0d2-d0a5-414b-88ea-e585a8a6f471"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:04:41.822254 master-0 kubenswrapper[7864]: I0224 02:04:41.821897 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 24 02:04:41.827177 master-0 kubenswrapper[7864]: I0224 02:04:41.827129 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80c4b74e-052d-4202-94aa-a9d35a4b2410-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "80c4b74e-052d-4202-94aa-a9d35a4b2410" (UID: "80c4b74e-052d-4202-94aa-a9d35a4b2410"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:04:41.832975 master-0 kubenswrapper[7864]: I0224 02:04:41.830364 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 24 02:04:41.887798 master-0 kubenswrapper[7864]: I0224 02:04:41.886662 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd14dbd-ebb1-4442-85db-9a8567d3cdcc" path="/var/lib/kubelet/pods/4fd14dbd-ebb1-4442-85db-9a8567d3cdcc/volumes" Feb 24 02:04:41.901306 master-0 kubenswrapper[7864]: I0224 02:04:41.901261 7864 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:41.901306 master-0 kubenswrapper[7864]: I0224 02:04:41.901300 7864 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80c4b74e-052d-4202-94aa-a9d35a4b2410-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:41.901453 master-0 kubenswrapper[7864]: I0224 02:04:41.901315 7864 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:41.901453 master-0 kubenswrapper[7864]: I0224 02:04:41.901332 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:41.901453 master-0 kubenswrapper[7864]: I0224 02:04:41.901346 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:41.901453 master-0 kubenswrapper[7864]: I0224 02:04:41.901362 7864 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-dxwmm\" (UniqueName: \"kubernetes.io/projected/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-kube-api-access-dxwmm\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:41.901453 master-0 kubenswrapper[7864]: I0224 02:04:41.901377 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqbdx\" (UniqueName: \"kubernetes.io/projected/80c4b74e-052d-4202-94aa-a9d35a4b2410-kube-api-access-dqbdx\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:42.350060 master-0 kubenswrapper[7864]: I0224 02:04:42.349984 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dwmm5"] Feb 24 02:04:42.350377 master-0 kubenswrapper[7864]: E0224 02:04:42.350323 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd14dbd-ebb1-4442-85db-9a8567d3cdcc" containerName="installer" Feb 24 02:04:42.350377 master-0 kubenswrapper[7864]: I0224 02:04:42.350347 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd14dbd-ebb1-4442-85db-9a8567d3cdcc" containerName="installer" Feb 24 02:04:42.350524 master-0 kubenswrapper[7864]: I0224 02:04:42.350496 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd14dbd-ebb1-4442-85db-9a8567d3cdcc" containerName="installer" Feb 24 02:04:42.351670 master-0 kubenswrapper[7864]: I0224 02:04:42.351630 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:04:42.374259 master-0 kubenswrapper[7864]: I0224 02:04:42.374192 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dwmm5"] Feb 24 02:04:42.409654 master-0 kubenswrapper[7864]: I0224 02:04:42.409615 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-catalog-content\") pod \"certified-operators-dwmm5\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") " pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:04:42.409918 master-0 kubenswrapper[7864]: I0224 02:04:42.409896 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-utilities\") pod \"certified-operators-dwmm5\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") " pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:04:42.410126 master-0 kubenswrapper[7864]: I0224 02:04:42.410106 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmqn\" (UniqueName: \"kubernetes.io/projected/afda9f0b-a304-490a-a080-0384a0a4e85b-kube-api-access-5rmqn\") pod \"certified-operators-dwmm5\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") " pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:04:42.441665 master-0 kubenswrapper[7864]: I0224 02:04:42.441554 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 24 02:04:42.442515 master-0 kubenswrapper[7864]: I0224 02:04:42.442479 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 02:04:42.445729 master-0 kubenswrapper[7864]: I0224 02:04:42.445660 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 02:04:42.462215 master-0 kubenswrapper[7864]: I0224 02:04:42.462170 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 24 02:04:42.513471 master-0 kubenswrapper[7864]: I0224 02:04:42.513362 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-utilities\") pod \"certified-operators-dwmm5\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") " pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:04:42.514294 master-0 kubenswrapper[7864]: I0224 02:04:42.513492 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-var-lock\") pod \"installer-1-master-0\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 02:04:42.514294 master-0 kubenswrapper[7864]: I0224 02:04:42.513542 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 02:04:42.514294 master-0 kubenswrapper[7864]: I0224 02:04:42.513647 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rmqn\" (UniqueName: \"kubernetes.io/projected/afda9f0b-a304-490a-a080-0384a0a4e85b-kube-api-access-5rmqn\") pod \"certified-operators-dwmm5\" (UID: 
\"afda9f0b-a304-490a-a080-0384a0a4e85b\") " pod="openshift-marketplace/certified-operators-dwmm5"
Feb 24 02:04:42.514294 master-0 kubenswrapper[7864]: I0224 02:04:42.513688 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd02da41-8a48-4436-ae58-6363e7554898-kube-api-access\") pod \"installer-1-master-0\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 02:04:42.514294 master-0 kubenswrapper[7864]: I0224 02:04:42.513730 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-catalog-content\") pod \"certified-operators-dwmm5\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") " pod="openshift-marketplace/certified-operators-dwmm5"
Feb 24 02:04:42.514712 master-0 kubenswrapper[7864]: I0224 02:04:42.514550 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-catalog-content\") pod \"certified-operators-dwmm5\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") " pod="openshift-marketplace/certified-operators-dwmm5"
Feb 24 02:04:42.515147 master-0 kubenswrapper[7864]: I0224 02:04:42.515032 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-utilities\") pod \"certified-operators-dwmm5\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") " pod="openshift-marketplace/certified-operators-dwmm5"
Feb 24 02:04:42.533957 master-0 kubenswrapper[7864]: I0224 02:04:42.530216 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rvp5j"]
Feb 24 02:04:42.533957 master-0 kubenswrapper[7864]: I0224 02:04:42.532108 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.548496 master-0 kubenswrapper[7864]: I0224 02:04:42.548429 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rmqn\" (UniqueName: \"kubernetes.io/projected/afda9f0b-a304-490a-a080-0384a0a4e85b-kube-api-access-5rmqn\") pod \"certified-operators-dwmm5\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") " pod="openshift-marketplace/certified-operators-dwmm5"
Feb 24 02:04:42.552261 master-0 kubenswrapper[7864]: I0224 02:04:42.552219 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvp5j"]
Feb 24 02:04:42.615170 master-0 kubenswrapper[7864]: I0224 02:04:42.615006 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 02:04:42.615334 master-0 kubenswrapper[7864]: I0224 02:04:42.615155 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 02:04:42.615410 master-0 kubenswrapper[7864]: I0224 02:04:42.615334 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd02da41-8a48-4436-ae58-6363e7554898-kube-api-access\") pod \"installer-1-master-0\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 02:04:42.615538 master-0 kubenswrapper[7864]: I0224 02:04:42.615480 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9f88\" (UniqueName: \"kubernetes.io/projected/dae6353d-97ee-46f8-8430-0b5211134a03-kube-api-access-n9f88\") pod \"community-operators-rvp5j\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.615636 master-0 kubenswrapper[7864]: I0224 02:04:42.615547 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-catalog-content\") pod \"community-operators-rvp5j\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.615752 master-0 kubenswrapper[7864]: I0224 02:04:42.615705 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-utilities\") pod \"community-operators-rvp5j\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.615820 master-0 kubenswrapper[7864]: I0224 02:04:42.615781 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-var-lock\") pod \"installer-1-master-0\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 02:04:42.615934 master-0 kubenswrapper[7864]: I0224 02:04:42.615899 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-var-lock\") pod \"installer-1-master-0\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 02:04:42.637088 master-0 kubenswrapper[7864]: I0224 02:04:42.637039 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd02da41-8a48-4436-ae58-6363e7554898-kube-api-access\") pod \"installer-1-master-0\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 02:04:42.677322 master-0 kubenswrapper[7864]: I0224 02:04:42.677279 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dwmm5"
Feb 24 02:04:42.701778 master-0 kubenswrapper[7864]: I0224 02:04:42.701710 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"598f691a-1472-4198-bcd5-6956217d30f9","Type":"ContainerStarted","Data":"6c4eac387e8d79e14cfd6b834810c2855eadf37f4e9c2bf332be97551df50535"}
Feb 24 02:04:42.708278 master-0 kubenswrapper[7864]: I0224 02:04:42.708211 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"683deae1-94b1-4c17-a73f-ad628a09134b","Type":"ContainerStarted","Data":"94401de1842b75a4dd153e2d7cb3bd01f3f26706beddf59514cdea6c0eb4a139"}
Feb 24 02:04:42.729102 master-0 kubenswrapper[7864]: I0224 02:04:42.713853 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56767fb5d4-2ghfz"
Feb 24 02:04:42.729102 master-0 kubenswrapper[7864]: I0224 02:04:42.714304 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"
Feb 24 02:04:42.729102 master-0 kubenswrapper[7864]: I0224 02:04:42.717757 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9f88\" (UniqueName: \"kubernetes.io/projected/dae6353d-97ee-46f8-8430-0b5211134a03-kube-api-access-n9f88\") pod \"community-operators-rvp5j\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.729102 master-0 kubenswrapper[7864]: I0224 02:04:42.717875 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-catalog-content\") pod \"community-operators-rvp5j\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.729102 master-0 kubenswrapper[7864]: I0224 02:04:42.717957 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-utilities\") pod \"community-operators-rvp5j\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.729102 master-0 kubenswrapper[7864]: I0224 02:04:42.718879 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-utilities\") pod \"community-operators-rvp5j\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.729102 master-0 kubenswrapper[7864]: I0224 02:04:42.718891 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-catalog-content\") pod \"community-operators-rvp5j\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.741146 master-0 kubenswrapper[7864]: I0224 02:04:42.740480 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=3.740456717 podStartE2EDuration="3.740456717s" podCreationTimestamp="2026-02-24 02:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:42.735966534 +0000 UTC m=+47.063620186" watchObservedRunningTime="2026-02-24 02:04:42.740456717 +0000 UTC m=+47.068110379"
Feb 24 02:04:42.769894 master-0 kubenswrapper[7864]: I0224 02:04:42.769829 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9f88\" (UniqueName: \"kubernetes.io/projected/dae6353d-97ee-46f8-8430-0b5211134a03-kube-api-access-n9f88\") pod \"community-operators-rvp5j\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.781612 master-0 kubenswrapper[7864]: I0224 02:04:42.776690 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 02:04:42.812780 master-0 kubenswrapper[7864]: I0224 02:04:42.811721 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=2.811687028 podStartE2EDuration="2.811687028s" podCreationTimestamp="2026-02-24 02:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:42.764338592 +0000 UTC m=+47.091992244" watchObservedRunningTime="2026-02-24 02:04:42.811687028 +0000 UTC m=+47.139340690"
Feb 24 02:04:42.820564 master-0 kubenswrapper[7864]: I0224 02:04:42.819152 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56767fb5d4-2ghfz"]
Feb 24 02:04:42.826239 master-0 kubenswrapper[7864]: I0224 02:04:42.825860 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56767fb5d4-2ghfz"]
Feb 24 02:04:42.848045 master-0 kubenswrapper[7864]: I0224 02:04:42.847992 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"]
Feb 24 02:04:42.852786 master-0 kubenswrapper[7864]: I0224 02:04:42.852743 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-975858db4-g96fv"]
Feb 24 02:04:42.886660 master-0 kubenswrapper[7864]: I0224 02:04:42.886138 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvp5j"
Feb 24 02:04:42.923176 master-0 kubenswrapper[7864]: I0224 02:04:42.923119 7864 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80c4b74e-052d-4202-94aa-a9d35a4b2410-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:42.923176 master-0 kubenswrapper[7864]: I0224 02:04:42.923160 7864 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 02:04:43.139605 master-0 kubenswrapper[7864]: I0224 02:04:43.139412 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dwmm5"]
Feb 24 02:04:43.152153 master-0 kubenswrapper[7864]: W0224 02:04:43.152100 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafda9f0b_a304_490a_a080_0384a0a4e85b.slice/crio-d63981b48a66be59479eff1aa85e6d210bec4551670260a1b35a8bdeca808631 WatchSource:0}: Error finding container d63981b48a66be59479eff1aa85e6d210bec4551670260a1b35a8bdeca808631: Status 404 returned error can't find the container with id d63981b48a66be59479eff1aa85e6d210bec4551670260a1b35a8bdeca808631
Feb 24 02:04:43.263869 master-0 kubenswrapper[7864]: I0224 02:04:43.262456 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 24 02:04:43.282056 master-0 kubenswrapper[7864]: W0224 02:04:43.281986 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbd02da41_8a48_4436_ae58_6363e7554898.slice/crio-66e7c8048611a89e5a8013562e4ce272875677cb82b5701298ff4ad3f7e01366 WatchSource:0}: Error finding container 66e7c8048611a89e5a8013562e4ce272875677cb82b5701298ff4ad3f7e01366: Status 404 returned error can't find the container with id 66e7c8048611a89e5a8013562e4ce272875677cb82b5701298ff4ad3f7e01366
Feb 24 02:04:43.396627 master-0 kubenswrapper[7864]: W0224 02:04:43.396542 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddae6353d_97ee_46f8_8430_0b5211134a03.slice/crio-50356bfb3d2696621a889bb656126f097ae204f6e3e58121f4dfd1103a714bae WatchSource:0}: Error finding container 50356bfb3d2696621a889bb656126f097ae204f6e3e58121f4dfd1103a714bae: Status 404 returned error can't find the container with id 50356bfb3d2696621a889bb656126f097ae204f6e3e58121f4dfd1103a714bae
Feb 24 02:04:43.404248 master-0 kubenswrapper[7864]: I0224 02:04:43.404183 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvp5j"]
Feb 24 02:04:43.518793 master-0 kubenswrapper[7864]: I0224 02:04:43.518722 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"]
Feb 24 02:04:43.519865 master-0 kubenswrapper[7864]: I0224 02:04:43.519434 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.521396 master-0 kubenswrapper[7864]: I0224 02:04:43.521341 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 24 02:04:43.552694 master-0 kubenswrapper[7864]: I0224 02:04:43.552220 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"]
Feb 24 02:04:43.634098 master-0 kubenswrapper[7864]: I0224 02:04:43.633838 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-webhook-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.634098 master-0 kubenswrapper[7864]: I0224 02:04:43.633921 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9cad383a-cb69-41a8-aec8-23ee1c930430-tmpfs\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.634098 master-0 kubenswrapper[7864]: I0224 02:04:43.633956 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svc78\" (UniqueName: \"kubernetes.io/projected/9cad383a-cb69-41a8-aec8-23ee1c930430-kube-api-access-svc78\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.634098 master-0 kubenswrapper[7864]: I0224 02:04:43.633996 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-apiservice-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.722359 master-0 kubenswrapper[7864]: I0224 02:04:43.722287 7864 generic.go:334] "Generic (PLEG): container finished" podID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerID="2c80afcf16dd6a27d5d95ff164c146d6550b41c092afe1dfc56432c7b3cf9a3d" exitCode=0
Feb 24 02:04:43.722599 master-0 kubenswrapper[7864]: I0224 02:04:43.722407 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmm5" event={"ID":"afda9f0b-a304-490a-a080-0384a0a4e85b","Type":"ContainerDied","Data":"2c80afcf16dd6a27d5d95ff164c146d6550b41c092afe1dfc56432c7b3cf9a3d"}
Feb 24 02:04:43.722599 master-0 kubenswrapper[7864]: I0224 02:04:43.722470 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmm5" event={"ID":"afda9f0b-a304-490a-a080-0384a0a4e85b","Type":"ContainerStarted","Data":"d63981b48a66be59479eff1aa85e6d210bec4551670260a1b35a8bdeca808631"}
Feb 24 02:04:43.725015 master-0 kubenswrapper[7864]: I0224 02:04:43.724974 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"bd02da41-8a48-4436-ae58-6363e7554898","Type":"ContainerStarted","Data":"66e7c8048611a89e5a8013562e4ce272875677cb82b5701298ff4ad3f7e01366"}
Feb 24 02:04:43.728249 master-0 kubenswrapper[7864]: I0224 02:04:43.728203 7864 generic.go:334] "Generic (PLEG): container finished" podID="dae6353d-97ee-46f8-8430-0b5211134a03" containerID="efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62" exitCode=0
Feb 24 02:04:43.728413 master-0 kubenswrapper[7864]: I0224 02:04:43.728383 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp5j" event={"ID":"dae6353d-97ee-46f8-8430-0b5211134a03","Type":"ContainerDied","Data":"efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62"}
Feb 24 02:04:43.728458 master-0 kubenswrapper[7864]: I0224 02:04:43.728422 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp5j" event={"ID":"dae6353d-97ee-46f8-8430-0b5211134a03","Type":"ContainerStarted","Data":"50356bfb3d2696621a889bb656126f097ae204f6e3e58121f4dfd1103a714bae"}
Feb 24 02:04:43.739590 master-0 kubenswrapper[7864]: I0224 02:04:43.738616 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9cad383a-cb69-41a8-aec8-23ee1c930430-tmpfs\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.739590 master-0 kubenswrapper[7864]: I0224 02:04:43.738723 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svc78\" (UniqueName: \"kubernetes.io/projected/9cad383a-cb69-41a8-aec8-23ee1c930430-kube-api-access-svc78\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.739590 master-0 kubenswrapper[7864]: I0224 02:04:43.738777 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-apiservice-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.739590 master-0 kubenswrapper[7864]: I0224 02:04:43.738860 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-webhook-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.739590 master-0 kubenswrapper[7864]: I0224 02:04:43.739029 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9cad383a-cb69-41a8-aec8-23ee1c930430-tmpfs\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.743413 master-0 kubenswrapper[7864]: I0224 02:04:43.743356 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-webhook-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.743856 master-0 kubenswrapper[7864]: I0224 02:04:43.743826 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-apiservice-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.766278 master-0 kubenswrapper[7864]: I0224 02:04:43.766234 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svc78\" (UniqueName: \"kubernetes.io/projected/9cad383a-cb69-41a8-aec8-23ee1c930430-kube-api-access-svc78\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.876605 master-0 kubenswrapper[7864]: I0224 02:04:43.876532 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:43.884230 master-0 kubenswrapper[7864]: I0224 02:04:43.884171 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80c4b74e-052d-4202-94aa-a9d35a4b2410" path="/var/lib/kubelet/pods/80c4b74e-052d-4202-94aa-a9d35a4b2410/volumes"
Feb 24 02:04:43.884986 master-0 kubenswrapper[7864]: I0224 02:04:43.884952 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5bb0d2-d0a5-414b-88ea-e585a8a6f471" path="/var/lib/kubelet/pods/8b5bb0d2-d0a5-414b-88ea-e585a8a6f471/volumes"
Feb 24 02:04:43.938793 master-0 kubenswrapper[7864]: I0224 02:04:43.933406 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hrmdr"]
Feb 24 02:04:43.938793 master-0 kubenswrapper[7864]: I0224 02:04:43.935527 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:43.953109 master-0 kubenswrapper[7864]: I0224 02:04:43.953043 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrmdr"]
Feb 24 02:04:44.060821 master-0 kubenswrapper[7864]: I0224 02:04:44.060534 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-catalog-content\") pod \"redhat-marketplace-hrmdr\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") " pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.060821 master-0 kubenswrapper[7864]: I0224 02:04:44.060648 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-utilities\") pod \"redhat-marketplace-hrmdr\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") " pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.061285 master-0 kubenswrapper[7864]: I0224 02:04:44.061164 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z6d6\" (UniqueName: \"kubernetes.io/projected/37e3de57-34a2-4d55-9200-1bb94530c4ba-kube-api-access-5z6d6\") pod \"redhat-marketplace-hrmdr\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") " pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.162296 master-0 kubenswrapper[7864]: I0224 02:04:44.162088 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-catalog-content\") pod \"redhat-marketplace-hrmdr\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") " pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.162296 master-0 kubenswrapper[7864]: I0224 02:04:44.162155 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-utilities\") pod \"redhat-marketplace-hrmdr\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") " pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.162296 master-0 kubenswrapper[7864]: I0224 02:04:44.162189 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z6d6\" (UniqueName: \"kubernetes.io/projected/37e3de57-34a2-4d55-9200-1bb94530c4ba-kube-api-access-5z6d6\") pod \"redhat-marketplace-hrmdr\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") " pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.163130 master-0 kubenswrapper[7864]: I0224 02:04:44.162976 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-utilities\") pod \"redhat-marketplace-hrmdr\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") " pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.163130 master-0 kubenswrapper[7864]: I0224 02:04:44.163081 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-catalog-content\") pod \"redhat-marketplace-hrmdr\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") " pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.196268 master-0 kubenswrapper[7864]: I0224 02:04:44.196158 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z6d6\" (UniqueName: \"kubernetes.io/projected/37e3de57-34a2-4d55-9200-1bb94530c4ba-kube-api-access-5z6d6\") pod \"redhat-marketplace-hrmdr\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") " pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.309601 master-0 kubenswrapper[7864]: I0224 02:04:44.306861 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:04:44.428158 master-0 kubenswrapper[7864]: I0224 02:04:44.427832 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"]
Feb 24 02:04:44.747171 master-0 kubenswrapper[7864]: I0224 02:04:44.746487 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" event={"ID":"9cad383a-cb69-41a8-aec8-23ee1c930430","Type":"ContainerStarted","Data":"29a0512f64a48cd44b18e37c89ec77f34d9d3c4b94ceaef45526fc50d80bc784"}
Feb 24 02:04:44.747171 master-0 kubenswrapper[7864]: I0224 02:04:44.746547 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" event={"ID":"9cad383a-cb69-41a8-aec8-23ee1c930430","Type":"ContainerStarted","Data":"61e5e77001e1e5b4b53f6c82868401419bbcf0e5600dbe4c283c403c8bc8a720"}
Feb 24 02:04:44.747171 master-0 kubenswrapper[7864]: I0224 02:04:44.746770 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:04:44.761440 master-0 kubenswrapper[7864]: I0224 02:04:44.761074 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"bd02da41-8a48-4436-ae58-6363e7554898","Type":"ContainerStarted","Data":"beff9cdd09dcda0a6932e333a63d749970c5574701c511858c571df2f87fa178"}
Feb 24 02:04:44.778400 master-0 kubenswrapper[7864]: I0224 02:04:44.777138 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57df7db547-2v9c5"]
Feb 24 02:04:44.778617 master-0 kubenswrapper[7864]: I0224 02:04:44.778494 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5"
Feb 24 02:04:44.788476 master-0 kubenswrapper[7864]: I0224 02:04:44.788408 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"]
Feb 24 02:04:44.791861 master-0 kubenswrapper[7864]: I0224 02:04:44.790321 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"
Feb 24 02:04:44.796502 master-0 kubenswrapper[7864]: I0224 02:04:44.796370 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 02:04:44.799867 master-0 kubenswrapper[7864]: I0224 02:04:44.799412 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 02:04:44.799867 master-0 kubenswrapper[7864]: I0224 02:04:44.799536 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 02:04:44.799867 master-0 kubenswrapper[7864]: I0224 02:04:44.799603 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 02:04:44.799867 master-0 kubenswrapper[7864]: I0224 02:04:44.799829 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 02:04:44.811738 master-0 kubenswrapper[7864]: I0224 02:04:44.804446 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 02:04:44.811738 master-0 kubenswrapper[7864]: I0224 02:04:44.804417 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" podStartSLOduration=1.804392264 podStartE2EDuration="1.804392264s" podCreationTimestamp="2026-02-24 02:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:44.800069637 +0000 UTC m=+49.127723259" watchObservedRunningTime="2026-02-24 02:04:44.804392264 +0000 UTC m=+49.132045886"
Feb 24 02:04:44.811738 master-0 kubenswrapper[7864]: I0224 02:04:44.804650 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 02:04:44.811738 master-0 kubenswrapper[7864]: I0224 02:04:44.804961 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 02:04:44.811738 master-0 kubenswrapper[7864]: I0224 02:04:44.805216 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 02:04:44.811738 master-0 kubenswrapper[7864]: I0224 02:04:44.805389 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 02:04:44.811738 master-0 kubenswrapper[7864]: I0224 02:04:44.806646 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 02:04:44.811738 master-0 kubenswrapper[7864]: I0224 02:04:44.808639 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrmdr"]
Feb 24 02:04:44.811738 master-0 kubenswrapper[7864]: I0224 02:04:44.811389 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57df7db547-2v9c5"]
Feb 24 02:04:44.812227 master-0 kubenswrapper[7864]: I0224 02:04:44.812116 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"]
Feb 24 02:04:44.866961 master-0 kubenswrapper[7864]: I0224 02:04:44.866850 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=2.866791915 podStartE2EDuration="2.866791915s" podCreationTimestamp="2026-02-24 02:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:04:44.861170792 +0000 UTC m=+49.188824414" watchObservedRunningTime="2026-02-24 02:04:44.866791915 +0000 UTC m=+49.194445537"
Feb 24 02:04:44.876142 master-0 kubenswrapper[7864]: I0224 02:04:44.875145 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e6fd0d2-d629-4399-b008-979f28390943-serving-cert\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"
Feb 24 02:04:44.876142 master-0 kubenswrapper[7864]: I0224 02:04:44.875195 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1a99d5-e213-42b3-9538-44f68d993184-serving-cert\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5"
Feb 24 02:04:44.876142 master-0 kubenswrapper[7864]: I0224 02:04:44.875228 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-config\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5"
Feb 24 02:04:44.876142 master-0 kubenswrapper[7864]: I0224 02:04:44.875248 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-client-ca\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5"
Feb 24 02:04:44.876142 master-0 kubenswrapper[7864]: I0224 02:04:44.875269 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqr2q\" (UniqueName: \"kubernetes.io/projected/8e6fd0d2-d629-4399-b008-979f28390943-kube-api-access-bqr2q\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"
Feb 24 02:04:44.876142 master-0 kubenswrapper[7864]: I0224 02:04:44.875292 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd4jm\" (UniqueName: \"kubernetes.io/projected/bd1a99d5-e213-42b3-9538-44f68d993184-kube-api-access-xd4jm\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5"
Feb 24 02:04:44.876142 master-0 kubenswrapper[7864]: I0224 02:04:44.875338 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-proxy-ca-bundles\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5"
Feb 24 02:04:44.876142 master-0 kubenswrapper[7864]: I0224 02:04:44.875355 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-client-ca\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"
Feb 24 02:04:44.876142 master-0 kubenswrapper[7864]: I0224 02:04:44.875392 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-config\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"
Feb 24 02:04:44.976459 master-0 kubenswrapper[7864]: I0224 02:04:44.976405 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1a99d5-e213-42b3-9538-44f68d993184-serving-cert\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5"
Feb 24 02:04:44.976526 master-0 kubenswrapper[7864]: I0224 02:04:44.976469 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-client-ca\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5"
Feb 24 02:04:44.976526 master-0 kubenswrapper[7864]: I0224 02:04:44.976489 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-config\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24
02:04:44.976526 master-0 kubenswrapper[7864]: I0224 02:04:44.976515 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqr2q\" (UniqueName: \"kubernetes.io/projected/8e6fd0d2-d629-4399-b008-979f28390943-kube-api-access-bqr2q\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:44.976632 master-0 kubenswrapper[7864]: I0224 02:04:44.976534 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd4jm\" (UniqueName: \"kubernetes.io/projected/bd1a99d5-e213-42b3-9538-44f68d993184-kube-api-access-xd4jm\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:44.976632 master-0 kubenswrapper[7864]: I0224 02:04:44.976556 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-proxy-ca-bundles\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:44.976632 master-0 kubenswrapper[7864]: I0224 02:04:44.976588 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-client-ca\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:44.976632 master-0 kubenswrapper[7864]: I0224 02:04:44.976613 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-config\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:44.976740 master-0 kubenswrapper[7864]: I0224 02:04:44.976660 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e6fd0d2-d629-4399-b008-979f28390943-serving-cert\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:44.978294 master-0 kubenswrapper[7864]: I0224 02:04:44.978251 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-client-ca\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:44.978716 master-0 kubenswrapper[7864]: I0224 02:04:44.978672 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-client-ca\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:44.979184 master-0 kubenswrapper[7864]: I0224 02:04:44.979146 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-config\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 
02:04:44.979342 master-0 kubenswrapper[7864]: I0224 02:04:44.979296 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-config\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:44.980456 master-0 kubenswrapper[7864]: I0224 02:04:44.980427 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-proxy-ca-bundles\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:44.982252 master-0 kubenswrapper[7864]: I0224 02:04:44.982223 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1a99d5-e213-42b3-9538-44f68d993184-serving-cert\") pod \"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:44.982564 master-0 kubenswrapper[7864]: I0224 02:04:44.982518 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e6fd0d2-d629-4399-b008-979f28390943-serving-cert\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:44.997016 master-0 kubenswrapper[7864]: I0224 02:04:44.996945 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd4jm\" (UniqueName: \"kubernetes.io/projected/bd1a99d5-e213-42b3-9538-44f68d993184-kube-api-access-xd4jm\") pod 
\"controller-manager-57df7db547-2v9c5\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:44.997016 master-0 kubenswrapper[7864]: I0224 02:04:44.996983 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqr2q\" (UniqueName: \"kubernetes.io/projected/8e6fd0d2-d629-4399-b008-979f28390943-kube-api-access-bqr2q\") pod \"route-controller-manager-56cd46585c-nhkd9\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:45.125462 master-0 kubenswrapper[7864]: I0224 02:04:45.125377 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g862w"] Feb 24 02:04:45.127606 master-0 kubenswrapper[7864]: I0224 02:04:45.127549 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.135185 master-0 kubenswrapper[7864]: I0224 02:04:45.135049 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:45.142058 master-0 kubenswrapper[7864]: I0224 02:04:45.141323 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g862w"] Feb 24 02:04:45.170647 master-0 kubenswrapper[7864]: I0224 02:04:45.168571 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:45.178942 master-0 kubenswrapper[7864]: I0224 02:04:45.178718 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zf97\" (UniqueName: \"kubernetes.io/projected/66fc4bf9-47d0-4530-a49e-912a61cc35fd-kube-api-access-7zf97\") pod \"redhat-operators-g862w\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") " pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.178942 master-0 kubenswrapper[7864]: I0224 02:04:45.178855 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-utilities\") pod \"redhat-operators-g862w\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") " pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.178942 master-0 kubenswrapper[7864]: I0224 02:04:45.178890 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-catalog-content\") pod \"redhat-operators-g862w\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") " pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.284061 master-0 kubenswrapper[7864]: I0224 02:04:45.283469 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-utilities\") pod \"redhat-operators-g862w\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") " pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.284061 master-0 kubenswrapper[7864]: I0224 02:04:45.283797 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-catalog-content\") pod \"redhat-operators-g862w\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") " pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.284061 master-0 kubenswrapper[7864]: I0224 02:04:45.283929 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zf97\" (UniqueName: \"kubernetes.io/projected/66fc4bf9-47d0-4530-a49e-912a61cc35fd-kube-api-access-7zf97\") pod \"redhat-operators-g862w\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") " pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.286105 master-0 kubenswrapper[7864]: I0224 02:04:45.284943 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-utilities\") pod \"redhat-operators-g862w\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") " pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.286105 master-0 kubenswrapper[7864]: I0224 02:04:45.285388 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-catalog-content\") pod \"redhat-operators-g862w\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") " pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.313582 master-0 kubenswrapper[7864]: I0224 02:04:45.313533 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zf97\" (UniqueName: \"kubernetes.io/projected/66fc4bf9-47d0-4530-a49e-912a61cc35fd-kube-api-access-7zf97\") pod \"redhat-operators-g862w\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") " pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.435359 master-0 kubenswrapper[7864]: I0224 02:04:45.433028 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" Feb 24 02:04:45.512619 master-0 kubenswrapper[7864]: I0224 02:04:45.511932 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:04:45.617091 master-0 kubenswrapper[7864]: I0224 02:04:45.616371 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57df7db547-2v9c5"] Feb 24 02:04:45.627224 master-0 kubenswrapper[7864]: W0224 02:04:45.627131 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1a99d5_e213_42b3_9538_44f68d993184.slice/crio-4ead8768953af0cfa689ea585f04601b758642042fcf9f682eda77679bdd25c2 WatchSource:0}: Error finding container 4ead8768953af0cfa689ea585f04601b758642042fcf9f682eda77679bdd25c2: Status 404 returned error can't find the container with id 4ead8768953af0cfa689ea585f04601b758642042fcf9f682eda77679bdd25c2 Feb 24 02:04:45.664536 master-0 kubenswrapper[7864]: I0224 02:04:45.664469 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"] Feb 24 02:04:45.666245 master-0 kubenswrapper[7864]: W0224 02:04:45.665895 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e6fd0d2_d629_4399_b008_979f28390943.slice/crio-9ab3d7488d1528aaa7ffd06e009913bbe758abf8174f66f8797cbc3ed9bce292 WatchSource:0}: Error finding container 9ab3d7488d1528aaa7ffd06e009913bbe758abf8174f66f8797cbc3ed9bce292: Status 404 returned error can't find the container with id 9ab3d7488d1528aaa7ffd06e009913bbe758abf8174f66f8797cbc3ed9bce292 Feb 24 02:04:45.678267 master-0 kubenswrapper[7864]: I0224 02:04:45.678242 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:04:45.781285 master-0 kubenswrapper[7864]: I0224 02:04:45.781143 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" event={"ID":"8e6fd0d2-d629-4399-b008-979f28390943","Type":"ContainerStarted","Data":"9ab3d7488d1528aaa7ffd06e009913bbe758abf8174f66f8797cbc3ed9bce292"} Feb 24 02:04:45.783346 master-0 kubenswrapper[7864]: I0224 02:04:45.783282 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" event={"ID":"bd1a99d5-e213-42b3-9538-44f68d993184","Type":"ContainerStarted","Data":"4ead8768953af0cfa689ea585f04601b758642042fcf9f682eda77679bdd25c2"} Feb 24 02:04:45.785420 master-0 kubenswrapper[7864]: I0224 02:04:45.785375 7864 generic.go:334] "Generic (PLEG): container finished" podID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerID="3a7cb7544ef9adc2d8aa9406b02bf17a505621aa7f06c4c4f21130b77ce2cb96" exitCode=0 Feb 24 02:04:45.788529 master-0 kubenswrapper[7864]: I0224 02:04:45.788482 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrmdr" event={"ID":"37e3de57-34a2-4d55-9200-1bb94530c4ba","Type":"ContainerDied","Data":"3a7cb7544ef9adc2d8aa9406b02bf17a505621aa7f06c4c4f21130b77ce2cb96"} Feb 24 02:04:45.788529 master-0 kubenswrapper[7864]: I0224 02:04:45.788525 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrmdr" event={"ID":"37e3de57-34a2-4d55-9200-1bb94530c4ba","Type":"ContainerStarted","Data":"4dae55f823ae7282c11d348d0d8dbe2c0d2f0c63af0a55678ff766d1c40919a2"} Feb 24 02:04:46.126602 master-0 kubenswrapper[7864]: I0224 02:04:46.126499 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g862w"] Feb 24 02:04:46.145141 master-0 kubenswrapper[7864]: W0224 02:04:46.145052 7864 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66fc4bf9_47d0_4530_a49e_912a61cc35fd.slice/crio-854b5254b3aab6aa03516df6e3f68bfe61a7f44a97343b1454b52cf5fb8570fb WatchSource:0}: Error finding container 854b5254b3aab6aa03516df6e3f68bfe61a7f44a97343b1454b52cf5fb8570fb: Status 404 returned error can't find the container with id 854b5254b3aab6aa03516df6e3f68bfe61a7f44a97343b1454b52cf5fb8570fb Feb 24 02:04:46.796723 master-0 kubenswrapper[7864]: I0224 02:04:46.796614 7864 generic.go:334] "Generic (PLEG): container finished" podID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerID="62c61ea685817e2cfa906ea1d16d68da17c02d130b5c19c33206e9b53300e193" exitCode=0 Feb 24 02:04:46.797524 master-0 kubenswrapper[7864]: I0224 02:04:46.796768 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g862w" event={"ID":"66fc4bf9-47d0-4530-a49e-912a61cc35fd","Type":"ContainerDied","Data":"62c61ea685817e2cfa906ea1d16d68da17c02d130b5c19c33206e9b53300e193"} Feb 24 02:04:46.797524 master-0 kubenswrapper[7864]: I0224 02:04:46.796904 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g862w" event={"ID":"66fc4bf9-47d0-4530-a49e-912a61cc35fd","Type":"ContainerStarted","Data":"854b5254b3aab6aa03516df6e3f68bfe61a7f44a97343b1454b52cf5fb8570fb"} Feb 24 02:04:52.531881 master-0 kubenswrapper[7864]: I0224 02:04:52.531773 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 24 02:04:52.532181 master-0 kubenswrapper[7864]: I0224 02:04:52.532013 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="598f691a-1472-4198-bcd5-6956217d30f9" containerName="installer" containerID="cri-o://6c4eac387e8d79e14cfd6b834810c2855eadf37f4e9c2bf332be97551df50535" gracePeriod=30 Feb 24 
02:04:53.288595 master-0 kubenswrapper[7864]: I0224 02:04:53.282790 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_8436c0c2-ba30-462c-a003-ce076ff59ee1/installer/0.log" Feb 24 02:04:53.288595 master-0 kubenswrapper[7864]: I0224 02:04:53.282860 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:53.384934 master-0 kubenswrapper[7864]: I0224 02:04:53.381324 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-var-lock\") pod \"8436c0c2-ba30-462c-a003-ce076ff59ee1\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " Feb 24 02:04:53.384934 master-0 kubenswrapper[7864]: I0224 02:04:53.381403 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8436c0c2-ba30-462c-a003-ce076ff59ee1-kube-api-access\") pod \"8436c0c2-ba30-462c-a003-ce076ff59ee1\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " Feb 24 02:04:53.384934 master-0 kubenswrapper[7864]: I0224 02:04:53.381443 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-kubelet-dir\") pod \"8436c0c2-ba30-462c-a003-ce076ff59ee1\" (UID: \"8436c0c2-ba30-462c-a003-ce076ff59ee1\") " Feb 24 02:04:53.384934 master-0 kubenswrapper[7864]: I0224 02:04:53.381507 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-var-lock" (OuterVolumeSpecName: "var-lock") pod "8436c0c2-ba30-462c-a003-ce076ff59ee1" (UID: "8436c0c2-ba30-462c-a003-ce076ff59ee1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:04:53.384934 master-0 kubenswrapper[7864]: I0224 02:04:53.381610 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8436c0c2-ba30-462c-a003-ce076ff59ee1" (UID: "8436c0c2-ba30-462c-a003-ce076ff59ee1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:04:53.384934 master-0 kubenswrapper[7864]: I0224 02:04:53.381875 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:53.384934 master-0 kubenswrapper[7864]: I0224 02:04:53.381889 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8436c0c2-ba30-462c-a003-ce076ff59ee1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:53.385297 master-0 kubenswrapper[7864]: I0224 02:04:53.385001 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8436c0c2-ba30-462c-a003-ce076ff59ee1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8436c0c2-ba30-462c-a003-ce076ff59ee1" (UID: "8436c0c2-ba30-462c-a003-ce076ff59ee1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:04:53.492679 master-0 kubenswrapper[7864]: I0224 02:04:53.482646 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8436c0c2-ba30-462c-a003-ce076ff59ee1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:04:53.492679 master-0 kubenswrapper[7864]: I0224 02:04:53.490961 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_8436c0c2-ba30-462c-a003-ce076ff59ee1/installer/0.log" Feb 24 02:04:53.492679 master-0 kubenswrapper[7864]: I0224 02:04:53.491032 7864 generic.go:334] "Generic (PLEG): container finished" podID="8436c0c2-ba30-462c-a003-ce076ff59ee1" containerID="591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4" exitCode=1 Feb 24 02:04:53.492679 master-0 kubenswrapper[7864]: I0224 02:04:53.491108 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 02:04:53.492679 master-0 kubenswrapper[7864]: I0224 02:04:53.491147 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"8436c0c2-ba30-462c-a003-ce076ff59ee1","Type":"ContainerDied","Data":"591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4"} Feb 24 02:04:53.492679 master-0 kubenswrapper[7864]: I0224 02:04:53.491195 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"8436c0c2-ba30-462c-a003-ce076ff59ee1","Type":"ContainerDied","Data":"eae8d87dbece0b375f133f5b2d88b4f7a5bcf77c6a392134712a99310e0394fe"} Feb 24 02:04:53.492679 master-0 kubenswrapper[7864]: I0224 02:04:53.491214 7864 scope.go:117] "RemoveContainer" containerID="591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4" Feb 24 02:04:53.509938 master-0 kubenswrapper[7864]: I0224 02:04:53.508708 7864 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" event={"ID":"8e6fd0d2-d629-4399-b008-979f28390943","Type":"ContainerStarted","Data":"f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55"} Feb 24 02:04:53.509938 master-0 kubenswrapper[7864]: I0224 02:04:53.509027 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:53.522679 master-0 kubenswrapper[7864]: I0224 02:04:53.522633 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:04:53.525971 master-0 kubenswrapper[7864]: I0224 02:04:53.525740 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" event={"ID":"bd1a99d5-e213-42b3-9538-44f68d993184","Type":"ContainerStarted","Data":"5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21"} Feb 24 02:04:53.526280 master-0 kubenswrapper[7864]: I0224 02:04:53.526236 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:53.538643 master-0 kubenswrapper[7864]: I0224 02:04:53.536900 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:04:53.541680 master-0 kubenswrapper[7864]: I0224 02:04:53.541527 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" podStartSLOduration=5.695883511 podStartE2EDuration="12.541516145s" podCreationTimestamp="2026-02-24 02:04:41 +0000 UTC" firstStartedPulling="2026-02-24 02:04:45.669489926 +0000 UTC m=+49.997143548" lastFinishedPulling="2026-02-24 
02:04:52.51512256 +0000 UTC m=+56.842776182" observedRunningTime="2026-02-24 02:04:53.540427026 +0000 UTC m=+57.868080648" watchObservedRunningTime="2026-02-24 02:04:53.541516145 +0000 UTC m=+57.869169767" Feb 24 02:04:53.559494 master-0 kubenswrapper[7864]: I0224 02:04:53.559359 7864 scope.go:117] "RemoveContainer" containerID="591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4" Feb 24 02:04:53.560653 master-0 kubenswrapper[7864]: E0224 02:04:53.559880 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4\": container with ID starting with 591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4 not found: ID does not exist" containerID="591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4" Feb 24 02:04:53.560653 master-0 kubenswrapper[7864]: I0224 02:04:53.559921 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4"} err="failed to get container status \"591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4\": rpc error: code = NotFound desc = could not find container \"591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4\": container with ID starting with 591a05566cc7e2ccfb09e536118bdde145566fe70602687d004847303c0950b4 not found: ID does not exist" Feb 24 02:04:53.563433 master-0 kubenswrapper[7864]: I0224 02:04:53.563390 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 24 02:04:53.572677 master-0 kubenswrapper[7864]: I0224 02:04:53.572617 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 24 02:04:53.887840 master-0 kubenswrapper[7864]: I0224 02:04:53.884869 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8436c0c2-ba30-462c-a003-ce076ff59ee1" path="/var/lib/kubelet/pods/8436c0c2-ba30-462c-a003-ce076ff59ee1/volumes" Feb 24 02:04:53.918594 master-0 kubenswrapper[7864]: I0224 02:04:53.916911 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podStartSLOduration=7.02260203 podStartE2EDuration="13.916887568s" podCreationTimestamp="2026-02-24 02:04:40 +0000 UTC" firstStartedPulling="2026-02-24 02:04:45.629717384 +0000 UTC m=+49.957370996" lastFinishedPulling="2026-02-24 02:04:52.524002912 +0000 UTC m=+56.851656534" observedRunningTime="2026-02-24 02:04:53.615507777 +0000 UTC m=+57.943161399" watchObservedRunningTime="2026-02-24 02:04:53.916887568 +0000 UTC m=+58.244541190" Feb 24 02:04:53.918990 master-0 kubenswrapper[7864]: I0224 02:04:53.918966 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz"] Feb 24 02:04:53.922589 master-0 kubenswrapper[7864]: E0224 02:04:53.919144 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8436c0c2-ba30-462c-a003-ce076ff59ee1" containerName="installer" Feb 24 02:04:53.922589 master-0 kubenswrapper[7864]: I0224 02:04:53.919159 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="8436c0c2-ba30-462c-a003-ce076ff59ee1" containerName="installer" Feb 24 02:04:53.922589 master-0 kubenswrapper[7864]: I0224 02:04:53.919253 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="8436c0c2-ba30-462c-a003-ce076ff59ee1" containerName="installer" Feb 24 02:04:53.922734 master-0 kubenswrapper[7864]: I0224 02:04:53.922678 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:04:53.926139 master-0 kubenswrapper[7864]: I0224 02:04:53.926090 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-thdws" Feb 24 02:04:53.927219 master-0 kubenswrapper[7864]: I0224 02:04:53.926994 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 02:04:53.977656 master-0 kubenswrapper[7864]: I0224 02:04:53.977605 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz"] Feb 24 02:04:53.995536 master-0 kubenswrapper[7864]: I0224 02:04:53.995460 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57x9m\" (UniqueName: \"kubernetes.io/projected/a4cea44a-1c6e-465f-97df-2c951056cb85-kube-api-access-57x9m\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:04:53.995536 master-0 kubenswrapper[7864]: I0224 02:04:53.995527 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4cea44a-1c6e-465f-97df-2c951056cb85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:04:54.096433 master-0 kubenswrapper[7864]: I0224 02:04:54.096386 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a4cea44a-1c6e-465f-97df-2c951056cb85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:04:54.096692 master-0 kubenswrapper[7864]: I0224 02:04:54.096614 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57x9m\" (UniqueName: \"kubernetes.io/projected/a4cea44a-1c6e-465f-97df-2c951056cb85-kube-api-access-57x9m\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:04:54.101433 master-0 kubenswrapper[7864]: I0224 02:04:54.101402 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4cea44a-1c6e-465f-97df-2c951056cb85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:04:54.128331 master-0 kubenswrapper[7864]: I0224 02:04:54.128285 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57x9m\" (UniqueName: \"kubernetes.io/projected/a4cea44a-1c6e-465f-97df-2c951056cb85-kube-api-access-57x9m\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:04:54.244951 master-0 kubenswrapper[7864]: I0224 02:04:54.244857 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:04:54.758182 master-0 kubenswrapper[7864]: I0224 02:04:54.758135 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz"] Feb 24 02:04:54.770156 master-0 kubenswrapper[7864]: W0224 02:04:54.770041 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4cea44a_1c6e_465f_97df_2c951056cb85.slice/crio-cee49e60dc37b41a9c1559a523e9c4e3b09f5f3e76df27a36cc4a9d63ff6bee9 WatchSource:0}: Error finding container cee49e60dc37b41a9c1559a523e9c4e3b09f5f3e76df27a36cc4a9d63ff6bee9: Status 404 returned error can't find the container with id cee49e60dc37b41a9c1559a523e9c4e3b09f5f3e76df27a36cc4a9d63ff6bee9 Feb 24 02:04:54.960217 master-0 kubenswrapper[7864]: I0224 02:04:54.960167 7864 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 24 02:04:54.960467 master-0 kubenswrapper[7864]: I0224 02:04:54.960430 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" containerID="cri-o://5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22" gracePeriod=30 Feb 24 02:04:54.960555 master-0 kubenswrapper[7864]: I0224 02:04:54.960500 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" containerID="cri-o://e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803" gracePeriod=30 Feb 24 02:04:54.962033 master-0 kubenswrapper[7864]: I0224 02:04:54.962009 7864 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 24 02:04:54.962233 master-0 kubenswrapper[7864]: E0224 
02:04:54.962217 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 24 02:04:54.962233 master-0 kubenswrapper[7864]: I0224 02:04:54.962233 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 24 02:04:54.962309 master-0 kubenswrapper[7864]: E0224 02:04:54.962247 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 24 02:04:54.962309 master-0 kubenswrapper[7864]: I0224 02:04:54.962254 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 24 02:04:54.962358 master-0 kubenswrapper[7864]: I0224 02:04:54.962347 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 24 02:04:54.962386 master-0 kubenswrapper[7864]: I0224 02:04:54.962362 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 24 02:04:54.963798 master-0 kubenswrapper[7864]: I0224 02:04:54.963777 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.015711 master-0 kubenswrapper[7864]: I0224 02:04:55.015525 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.015711 master-0 kubenswrapper[7864]: I0224 02:04:55.015658 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.015711 master-0 kubenswrapper[7864]: I0224 02:04:55.015686 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.015992 master-0 kubenswrapper[7864]: I0224 02:04:55.015721 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.015992 master-0 kubenswrapper[7864]: I0224 02:04:55.015746 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.015992 master-0 
kubenswrapper[7864]: I0224 02:04:55.015824 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118127 master-0 kubenswrapper[7864]: I0224 02:04:55.118068 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118127 master-0 kubenswrapper[7864]: I0224 02:04:55.118130 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118383 master-0 kubenswrapper[7864]: I0224 02:04:55.118282 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118383 master-0 kubenswrapper[7864]: I0224 02:04:55.118339 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118440 master-0 kubenswrapper[7864]: I0224 02:04:55.118405 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118469 master-0 kubenswrapper[7864]: I0224 02:04:55.118433 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118568 master-0 kubenswrapper[7864]: I0224 02:04:55.118540 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118629 master-0 kubenswrapper[7864]: I0224 02:04:55.118543 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118629 master-0 kubenswrapper[7864]: I0224 02:04:55.118598 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118711 master-0 kubenswrapper[7864]: I0224 02:04:55.118669 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118711 master-0 
kubenswrapper[7864]: I0224 02:04:55.118702 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.118772 master-0 kubenswrapper[7864]: I0224 02:04:55.118681 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:04:55.540027 master-0 kubenswrapper[7864]: I0224 02:04:55.539984 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" event={"ID":"a4cea44a-1c6e-465f-97df-2c951056cb85","Type":"ContainerStarted","Data":"cee49e60dc37b41a9c1559a523e9c4e3b09f5f3e76df27a36cc4a9d63ff6bee9"} Feb 24 02:05:06.602811 master-0 kubenswrapper[7864]: I0224 02:05:06.602757 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g862w" event={"ID":"66fc4bf9-47d0-4530-a49e-912a61cc35fd","Type":"ContainerStarted","Data":"b78034075f91df2edeb691499dd0e273ccdf1af852814089e22a4e04132f7e74"} Feb 24 02:05:06.606253 master-0 kubenswrapper[7864]: I0224 02:05:06.606193 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp5j" event={"ID":"dae6353d-97ee-46f8-8430-0b5211134a03","Type":"ContainerStarted","Data":"71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38"} Feb 24 02:05:06.610208 master-0 kubenswrapper[7864]: I0224 02:05:06.610153 7864 generic.go:334] "Generic (PLEG): container finished" podID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerID="256f4504e3d4b82fdb34220ca0fb4428b8d341655ed1ff64de7c87b7b80ba566" exitCode=0 Feb 24 
02:05:06.610332 master-0 kubenswrapper[7864]: I0224 02:05:06.610223 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmm5" event={"ID":"afda9f0b-a304-490a-a080-0384a0a4e85b","Type":"ContainerDied","Data":"256f4504e3d4b82fdb34220ca0fb4428b8d341655ed1ff64de7c87b7b80ba566"} Feb 24 02:05:06.613089 master-0 kubenswrapper[7864]: I0224 02:05:06.613032 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" event={"ID":"a4cea44a-1c6e-465f-97df-2c951056cb85","Type":"ContainerStarted","Data":"333b5a9659128a52ffaed5da3c25d8feb0986d4e855c20f96e40ad31f9cb9171"} Feb 24 02:05:06.619776 master-0 kubenswrapper[7864]: I0224 02:05:06.619716 7864 generic.go:334] "Generic (PLEG): container finished" podID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerID="070c8f9bc2ed166f63fc4c2e18cc7b61c41cc8c4f62eef10cba0347b004004dd" exitCode=0 Feb 24 02:05:06.620017 master-0 kubenswrapper[7864]: I0224 02:05:06.619777 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrmdr" event={"ID":"37e3de57-34a2-4d55-9200-1bb94530c4ba","Type":"ContainerDied","Data":"070c8f9bc2ed166f63fc4c2e18cc7b61c41cc8c4f62eef10cba0347b004004dd"} Feb 24 02:05:07.629637 master-0 kubenswrapper[7864]: I0224 02:05:07.629433 7864 generic.go:334] "Generic (PLEG): container finished" podID="dae6353d-97ee-46f8-8430-0b5211134a03" containerID="71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38" exitCode=0 Feb 24 02:05:07.630424 master-0 kubenswrapper[7864]: I0224 02:05:07.629621 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp5j" event={"ID":"dae6353d-97ee-46f8-8430-0b5211134a03","Type":"ContainerDied","Data":"71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38"} Feb 24 02:05:07.633681 master-0 kubenswrapper[7864]: I0224 02:05:07.633629 7864 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmm5" event={"ID":"afda9f0b-a304-490a-a080-0384a0a4e85b","Type":"ContainerStarted","Data":"0b73ffd7abb9891a785ae20eb828e6d67f31fc7de0f336bc6f4c9a7fd4183589"} Feb 24 02:05:07.639044 master-0 kubenswrapper[7864]: I0224 02:05:07.638971 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d" exitCode=1 Feb 24 02:05:07.639162 master-0 kubenswrapper[7864]: I0224 02:05:07.639082 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d"} Feb 24 02:05:07.639162 master-0 kubenswrapper[7864]: I0224 02:05:07.639146 7864 scope.go:117] "RemoveContainer" containerID="25efea610d9bc2514ea6f62c6d5763641769d0262eae03f839a9b98d1e3382eb" Feb 24 02:05:07.640599 master-0 kubenswrapper[7864]: I0224 02:05:07.639892 7864 scope.go:117] "RemoveContainer" containerID="bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d" Feb 24 02:05:07.645099 master-0 kubenswrapper[7864]: I0224 02:05:07.645045 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrmdr" event={"ID":"37e3de57-34a2-4d55-9200-1bb94530c4ba","Type":"ContainerStarted","Data":"322843bc1875642e381de77fa85af19267060376aa5ec0dd58286deaec2ce5ae"} Feb 24 02:05:07.648224 master-0 kubenswrapper[7864]: I0224 02:05:07.648172 7864 generic.go:334] "Generic (PLEG): container finished" podID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerID="b78034075f91df2edeb691499dd0e273ccdf1af852814089e22a4e04132f7e74" exitCode=0 Feb 24 02:05:07.649654 master-0 kubenswrapper[7864]: I0224 02:05:07.649611 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g862w" 
event={"ID":"66fc4bf9-47d0-4530-a49e-912a61cc35fd","Type":"ContainerDied","Data":"b78034075f91df2edeb691499dd0e273ccdf1af852814089e22a4e04132f7e74"} Feb 24 02:05:07.778768 master-0 kubenswrapper[7864]: E0224 02:05:07.778710 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:07.993739 master-0 kubenswrapper[7864]: E0224 02:05:07.993695 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 24 02:05:07.994467 master-0 kubenswrapper[7864]: I0224 02:05:07.994440 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 24 02:05:08.656736 master-0 kubenswrapper[7864]: I0224 02:05:08.656649 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g862w" event={"ID":"66fc4bf9-47d0-4530-a49e-912a61cc35fd","Type":"ContainerStarted","Data":"2532aa7e9c30e14420bff6125a2af5695149f5e5139bfadc9ef461f7fd4adc89"} Feb 24 02:05:08.659100 master-0 kubenswrapper[7864]: I0224 02:05:08.659050 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp5j" event={"ID":"dae6353d-97ee-46f8-8430-0b5211134a03","Type":"ContainerStarted","Data":"205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077"} Feb 24 02:05:08.662390 master-0 kubenswrapper[7864]: I0224 02:05:08.662338 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108"} Feb 24 02:05:08.664511 master-0 kubenswrapper[7864]: 
I0224 02:05:08.664462 7864 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="dd25a51f43f2a6b08e822686b9a5c9a9800c428fe9e69fa2ec6e4544069e9236" exitCode=0 Feb 24 02:05:08.664650 master-0 kubenswrapper[7864]: I0224 02:05:08.664610 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"dd25a51f43f2a6b08e822686b9a5c9a9800c428fe9e69fa2ec6e4544069e9236"} Feb 24 02:05:08.664747 master-0 kubenswrapper[7864]: I0224 02:05:08.664686 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"16c4d0ed7ca297fee04f04f0e276b3ec6a75e1aafd2dd8706665758071e7df12"} Feb 24 02:05:09.673742 master-0 kubenswrapper[7864]: I0224 02:05:09.673694 7864 generic.go:334] "Generic (PLEG): container finished" podID="64b7ea36-8849-4955-80b5-c7e7c12fcc29" containerID="266ce948594252c2399468918fec845a74da7e6fcd999550c798b018f78a387f" exitCode=0 Feb 24 02:05:09.674408 master-0 kubenswrapper[7864]: I0224 02:05:09.674384 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"64b7ea36-8849-4955-80b5-c7e7c12fcc29","Type":"ContainerDied","Data":"266ce948594252c2399468918fec845a74da7e6fcd999550c798b018f78a387f"} Feb 24 02:05:10.213113 master-0 kubenswrapper[7864]: I0224 02:05:10.213013 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:05:11.089477 master-0 kubenswrapper[7864]: I0224 02:05:11.089383 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 24 02:05:11.158097 master-0 kubenswrapper[7864]: I0224 02:05:11.157387 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-var-lock\") pod \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " Feb 24 02:05:11.158097 master-0 kubenswrapper[7864]: I0224 02:05:11.157552 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kube-api-access\") pod \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " Feb 24 02:05:11.158097 master-0 kubenswrapper[7864]: I0224 02:05:11.157606 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-var-lock" (OuterVolumeSpecName: "var-lock") pod "64b7ea36-8849-4955-80b5-c7e7c12fcc29" (UID: "64b7ea36-8849-4955-80b5-c7e7c12fcc29"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:05:11.158097 master-0 kubenswrapper[7864]: I0224 02:05:11.157675 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kubelet-dir\") pod \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\" (UID: \"64b7ea36-8849-4955-80b5-c7e7c12fcc29\") " Feb 24 02:05:11.158097 master-0 kubenswrapper[7864]: I0224 02:05:11.157983 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:05:11.158097 master-0 kubenswrapper[7864]: I0224 02:05:11.158039 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "64b7ea36-8849-4955-80b5-c7e7c12fcc29" (UID: "64b7ea36-8849-4955-80b5-c7e7c12fcc29"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:05:11.160836 master-0 kubenswrapper[7864]: I0224 02:05:11.160774 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "64b7ea36-8849-4955-80b5-c7e7c12fcc29" (UID: "64b7ea36-8849-4955-80b5-c7e7c12fcc29"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:05:11.259955 master-0 kubenswrapper[7864]: I0224 02:05:11.259902 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:05:11.259955 master-0 kubenswrapper[7864]: I0224 02:05:11.259951 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64b7ea36-8849-4955-80b5-c7e7c12fcc29-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:05:11.698674 master-0 kubenswrapper[7864]: I0224 02:05:11.698517 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"64b7ea36-8849-4955-80b5-c7e7c12fcc29","Type":"ContainerDied","Data":"88f76db39d71bd25ceb20f4306d7f26b67459cb15713885a8eb24d8304cfae77"} Feb 24 02:05:11.698674 master-0 kubenswrapper[7864]: I0224 02:05:11.698603 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 24 02:05:11.699081 master-0 kubenswrapper[7864]: I0224 02:05:11.698635 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f76db39d71bd25ceb20f4306d7f26b67459cb15713885a8eb24d8304cfae77" Feb 24 02:05:11.701640 master-0 kubenswrapper[7864]: I0224 02:05:11.701555 7864 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="0be92c811440a62e120f914b735c96a60861f339574fbe9008068727fac04419" exitCode=1 Feb 24 02:05:11.701640 master-0 kubenswrapper[7864]: I0224 02:05:11.701620 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"0be92c811440a62e120f914b735c96a60861f339574fbe9008068727fac04419"} Feb 24 02:05:11.702317 master-0 kubenswrapper[7864]: I0224 02:05:11.702258 7864 scope.go:117] "RemoveContainer" containerID="0be92c811440a62e120f914b735c96a60861f339574fbe9008068727fac04419" Feb 24 02:05:12.082636 master-0 kubenswrapper[7864]: I0224 02:05:12.081201 7864 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-46vmq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Feb 24 02:05:12.082636 master-0 kubenswrapper[7864]: I0224 02:05:12.081290 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" podUID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Feb 24 02:05:12.678266 master-0 kubenswrapper[7864]: I0224 02:05:12.678196 7864 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:05:12.678266 master-0 kubenswrapper[7864]: I0224 02:05:12.678254 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:05:12.720936 master-0 kubenswrapper[7864]: I0224 02:05:12.720868 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"5fbafce85063f872b1786e48e809b15f5aa08369e9d34c7d53d1c636ed17075e"} Feb 24 02:05:12.749967 master-0 kubenswrapper[7864]: W0224 02:05:12.749668 7864 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a83278819db2092fa26d8274eb3f00.slice/crio-conmon-dd25a51f43f2a6b08e822686b9a5c9a9800c428fe9e69fa2ec6e4544069e9236.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a83278819db2092fa26d8274eb3f00.slice/crio-conmon-dd25a51f43f2a6b08e822686b9a5c9a9800c428fe9e69fa2ec6e4544069e9236.scope: no such file or directory Feb 24 02:05:12.750662 master-0 kubenswrapper[7864]: W0224 02:05:12.750513 7864 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a83278819db2092fa26d8274eb3f00.slice/crio-dd25a51f43f2a6b08e822686b9a5c9a9800c428fe9e69fa2ec6e4544069e9236.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a83278819db2092fa26d8274eb3f00.slice/crio-dd25a51f43f2a6b08e822686b9a5c9a9800c428fe9e69fa2ec6e4544069e9236.scope: no such file or directory Feb 24 02:05:12.754518 master-0 kubenswrapper[7864]: I0224 02:05:12.754456 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:05:12.828904 master-0 kubenswrapper[7864]: I0224 02:05:12.828843 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:05:12.880155 master-0 kubenswrapper[7864]: E0224 02:05:12.880097 7864 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod598f691a_1472_4198_bcd5_6956217d30f9.slice/crio-conmon-6c4eac387e8d79e14cfd6b834810c2855eadf37f4e9c2bf332be97551df50535.scope\": RecentStats: unable to find data in memory cache]" Feb 24 02:05:12.887125 master-0 kubenswrapper[7864]: I0224 02:05:12.887060 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rvp5j" Feb 24 02:05:12.889416 master-0 kubenswrapper[7864]: I0224 02:05:12.889372 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rvp5j" Feb 24 02:05:12.965366 master-0 kubenswrapper[7864]: I0224 02:05:12.965310 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rvp5j" Feb 24 02:05:13.730784 master-0 kubenswrapper[7864]: I0224 02:05:13.730633 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_598f691a-1472-4198-bcd5-6956217d30f9/installer/0.log" Feb 24 02:05:13.730784 master-0 kubenswrapper[7864]: I0224 02:05:13.730721 7864 generic.go:334] "Generic (PLEG): container finished" podID="598f691a-1472-4198-bcd5-6956217d30f9" containerID="6c4eac387e8d79e14cfd6b834810c2855eadf37f4e9c2bf332be97551df50535" exitCode=1 Feb 24 02:05:13.731663 master-0 kubenswrapper[7864]: I0224 02:05:13.731610 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" 
event={"ID":"598f691a-1472-4198-bcd5-6956217d30f9","Type":"ContainerDied","Data":"6c4eac387e8d79e14cfd6b834810c2855eadf37f4e9c2bf332be97551df50535"} Feb 24 02:05:13.804733 master-0 kubenswrapper[7864]: I0224 02:05:13.804688 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rvp5j" Feb 24 02:05:13.833813 master-0 kubenswrapper[7864]: I0224 02:05:13.833755 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_598f691a-1472-4198-bcd5-6956217d30f9/installer/0.log" Feb 24 02:05:13.833948 master-0 kubenswrapper[7864]: I0224 02:05:13.833864 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 02:05:13.903052 master-0 kubenswrapper[7864]: I0224 02:05:13.902978 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-kubelet-dir\") pod \"598f691a-1472-4198-bcd5-6956217d30f9\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " Feb 24 02:05:13.903172 master-0 kubenswrapper[7864]: I0224 02:05:13.903156 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f691a-1472-4198-bcd5-6956217d30f9-kube-api-access\") pod \"598f691a-1472-4198-bcd5-6956217d30f9\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " Feb 24 02:05:13.903257 master-0 kubenswrapper[7864]: I0224 02:05:13.903192 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-var-lock\") pod \"598f691a-1472-4198-bcd5-6956217d30f9\" (UID: \"598f691a-1472-4198-bcd5-6956217d30f9\") " Feb 24 02:05:13.903257 master-0 kubenswrapper[7864]: I0224 02:05:13.903189 7864 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "598f691a-1472-4198-bcd5-6956217d30f9" (UID: "598f691a-1472-4198-bcd5-6956217d30f9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:05:13.903385 master-0 kubenswrapper[7864]: I0224 02:05:13.903320 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-var-lock" (OuterVolumeSpecName: "var-lock") pod "598f691a-1472-4198-bcd5-6956217d30f9" (UID: "598f691a-1472-4198-bcd5-6956217d30f9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:05:13.904305 master-0 kubenswrapper[7864]: I0224 02:05:13.904253 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:05:13.904529 master-0 kubenswrapper[7864]: I0224 02:05:13.904473 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598f691a-1472-4198-bcd5-6956217d30f9-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:05:13.907797 master-0 kubenswrapper[7864]: I0224 02:05:13.907746 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598f691a-1472-4198-bcd5-6956217d30f9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "598f691a-1472-4198-bcd5-6956217d30f9" (UID: "598f691a-1472-4198-bcd5-6956217d30f9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:05:14.005980 master-0 kubenswrapper[7864]: I0224 02:05:14.005852 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/598f691a-1472-4198-bcd5-6956217d30f9-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:05:14.307491 master-0 kubenswrapper[7864]: I0224 02:05:14.307350 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hrmdr" Feb 24 02:05:14.308076 master-0 kubenswrapper[7864]: I0224 02:05:14.308007 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hrmdr" Feb 24 02:05:14.376861 master-0 kubenswrapper[7864]: I0224 02:05:14.376807 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hrmdr" Feb 24 02:05:14.745200 master-0 kubenswrapper[7864]: I0224 02:05:14.745100 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_598f691a-1472-4198-bcd5-6956217d30f9/installer/0.log" Feb 24 02:05:14.747141 master-0 kubenswrapper[7864]: I0224 02:05:14.746669 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"598f691a-1472-4198-bcd5-6956217d30f9","Type":"ContainerDied","Data":"2b23e83f678bb636a25eee04dbc168b7185773cd9d9737c0ad45b8d37c504e40"} Feb 24 02:05:14.747141 master-0 kubenswrapper[7864]: I0224 02:05:14.746741 7864 scope.go:117] "RemoveContainer" containerID="6c4eac387e8d79e14cfd6b834810c2855eadf37f4e9c2bf332be97551df50535" Feb 24 02:05:14.747141 master-0 kubenswrapper[7864]: I0224 02:05:14.746897 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 02:05:14.809852 master-0 kubenswrapper[7864]: I0224 02:05:14.809786 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hrmdr" Feb 24 02:05:15.513669 master-0 kubenswrapper[7864]: I0224 02:05:15.513549 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:05:15.513994 master-0 kubenswrapper[7864]: I0224 02:05:15.513817 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:05:16.577319 master-0 kubenswrapper[7864]: I0224 02:05:16.577233 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g862w" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerName="registry-server" probeResult="failure" output=< Feb 24 02:05:16.577319 master-0 kubenswrapper[7864]: timeout: failed to connect service ":50051" within 1s Feb 24 02:05:16.577319 master-0 kubenswrapper[7864]: > Feb 24 02:05:17.217292 master-0 kubenswrapper[7864]: I0224 02:05:17.217219 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:05:17.610956 master-0 kubenswrapper[7864]: E0224 02:05:17.610606 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:05:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:05:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:05:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:05:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f40
7e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d9673
6c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126\\\"],\\\"sizeBytes\\\":438548891},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:17.779349 master-0 kubenswrapper[7864]: E0224 02:05:17.779225 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:18.948287 master-0 kubenswrapper[7864]: I0224 02:05:18.948220 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:05:20.217761 master-0 kubenswrapper[7864]: I0224 02:05:20.217648 7864 prober.go:107] "Probe failed" 
probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:21.674724 master-0 kubenswrapper[7864]: E0224 02:05:21.674566 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 24 02:05:22.081704 master-0 kubenswrapper[7864]: I0224 02:05:22.081627 7864 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-46vmq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Feb 24 02:05:22.081893 master-0 kubenswrapper[7864]: I0224 02:05:22.081721 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" podUID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Feb 24 02:05:22.799893 master-0 kubenswrapper[7864]: I0224 02:05:22.799838 7864 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803" exitCode=0 Feb 24 02:05:22.803955 master-0 kubenswrapper[7864]: I0224 02:05:22.803887 7864 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="40cdc6ec977c48256a7824f9e0a45c0db3de1ee50c08fc0891aa0798ab321016" exitCode=0 Feb 24 02:05:22.804075 master-0 
kubenswrapper[7864]: I0224 02:05:22.803966 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"40cdc6ec977c48256a7824f9e0a45c0db3de1ee50c08fc0891aa0798ab321016"} Feb 24 02:05:25.098659 master-0 kubenswrapper[7864]: I0224 02:05:25.098563 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 24 02:05:25.099327 master-0 kubenswrapper[7864]: I0224 02:05:25.098715 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:05:25.142444 master-0 kubenswrapper[7864]: I0224 02:05:25.142374 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 24 02:05:25.142444 master-0 kubenswrapper[7864]: I0224 02:05:25.142444 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 24 02:05:25.143021 master-0 kubenswrapper[7864]: I0224 02:05:25.142970 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs" (OuterVolumeSpecName: "certs") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:05:25.143235 master-0 kubenswrapper[7864]: I0224 02:05:25.143061 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir" (OuterVolumeSpecName: "data-dir") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:05:25.245045 master-0 kubenswrapper[7864]: I0224 02:05:25.244916 7864 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:05:25.245045 master-0 kubenswrapper[7864]: I0224 02:05:25.244964 7864 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:05:25.563895 master-0 kubenswrapper[7864]: I0224 02:05:25.563752 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:05:25.630268 master-0 kubenswrapper[7864]: I0224 02:05:25.630190 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g862w" Feb 24 02:05:25.828731 master-0 kubenswrapper[7864]: I0224 02:05:25.828473 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 24 02:05:25.828731 master-0 kubenswrapper[7864]: I0224 02:05:25.828562 7864 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22" exitCode=137 Feb 24 02:05:25.829067 master-0 kubenswrapper[7864]: I0224 02:05:25.828729 7864 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:05:25.829067 master-0 kubenswrapper[7864]: I0224 02:05:25.828742 7864 scope.go:117] "RemoveContainer" containerID="e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803" Feb 24 02:05:25.854374 master-0 kubenswrapper[7864]: I0224 02:05:25.854327 7864 scope.go:117] "RemoveContainer" containerID="5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22" Feb 24 02:05:25.874180 master-0 kubenswrapper[7864]: I0224 02:05:25.874140 7864 scope.go:117] "RemoveContainer" containerID="e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803" Feb 24 02:05:25.874718 master-0 kubenswrapper[7864]: E0224 02:05:25.874646 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803\": container with ID starting with e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803 not found: ID does not exist" containerID="e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803" Feb 24 02:05:25.874924 master-0 kubenswrapper[7864]: I0224 02:05:25.874741 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803"} err="failed to get container status \"e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803\": rpc error: code = NotFound desc = could not find container \"e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803\": container with ID starting with e4194596cf49a5fc19a191ab2dc31b30d69af67944bd0d82e51ad0a2c8b76803 not found: ID does not exist" Feb 24 02:05:25.874924 master-0 kubenswrapper[7864]: I0224 02:05:25.874778 7864 scope.go:117] "RemoveContainer" containerID="5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22" Feb 24 02:05:25.875336 master-0 
kubenswrapper[7864]: E0224 02:05:25.875296 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22\": container with ID starting with 5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22 not found: ID does not exist" containerID="5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22" Feb 24 02:05:25.875447 master-0 kubenswrapper[7864]: I0224 02:05:25.875343 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22"} err="failed to get container status \"5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22\": rpc error: code = NotFound desc = could not find container \"5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22\": container with ID starting with 5dc9293fc7d43ce4e59058df58bcbd64acfc36ec310c06377528876b47e64e22 not found: ID does not exist" Feb 24 02:05:25.885399 master-0 kubenswrapper[7864]: I0224 02:05:25.885312 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12dab5d350ebc129b0bfa4714d330b15" path="/var/lib/kubelet/pods/12dab5d350ebc129b0bfa4714d330b15/volumes" Feb 24 02:05:25.886120 master-0 kubenswrapper[7864]: I0224 02:05:25.886070 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 02:05:27.611410 master-0 kubenswrapper[7864]: E0224 02:05:27.611315 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:27.780761 master-0 kubenswrapper[7864]: E0224 02:05:27.780664 7864 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:28.850722 master-0 kubenswrapper[7864]: I0224 02:05:28.850644 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_bd02da41-8a48-4436-ae58-6363e7554898/installer/0.log" Feb 24 02:05:28.850722 master-0 kubenswrapper[7864]: I0224 02:05:28.850713 7864 generic.go:334] "Generic (PLEG): container finished" podID="bd02da41-8a48-4436-ae58-6363e7554898" containerID="beff9cdd09dcda0a6932e333a63d749970c5574701c511858c571df2f87fa178" exitCode=1 Feb 24 02:05:28.969174 master-0 kubenswrapper[7864]: E0224 02:05:28.967616 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.18970c79a318bdde openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:04:54.960463326 +0000 UTC m=+59.288116948,LastTimestamp:2026-02-24 02:04:54.960463326 +0000 UTC m=+59.288116948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:05:30.218085 master-0 kubenswrapper[7864]: I0224 02:05:30.217920 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:32.081488 master-0 kubenswrapper[7864]: I0224 02:05:32.081374 7864 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-46vmq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Feb 24 02:05:32.081488 master-0 kubenswrapper[7864]: I0224 02:05:32.081468 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" podUID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Feb 24 02:05:35.812908 master-0 kubenswrapper[7864]: E0224 02:05:35.812798 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 24 02:05:36.912435 master-0 kubenswrapper[7864]: I0224 02:05:36.912348 7864 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="a8c66d27d61884b6fe77ab4fbf5e74a8d795491882a22f1608be7b10b5068b90" exitCode=0 Feb 24 02:05:37.612124 master-0 kubenswrapper[7864]: E0224 02:05:37.612059 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:37.782053 master-0 kubenswrapper[7864]: E0224 02:05:37.781952 7864 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:39.937222 master-0 kubenswrapper[7864]: I0224 02:05:39.937148 7864 generic.go:334] "Generic (PLEG): container finished" podID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerID="5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce" exitCode=0 Feb 24 02:05:40.217820 master-0 kubenswrapper[7864]: I0224 02:05:40.217719 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:44.980986 master-0 kubenswrapper[7864]: I0224 02:05:44.980898 7864 generic.go:334] "Generic (PLEG): container finished" podID="9b5620d6-a5fe-45d7-b39e-8bed7f602a17" containerID="18e36dffd25cf50db60c55874ae5e83aa35faa4fa1dff2c477ec4899a01aa1f0" exitCode=0 Feb 24 02:05:47.612778 master-0 kubenswrapper[7864]: E0224 02:05:47.612693 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:47.782619 master-0 kubenswrapper[7864]: E0224 02:05:47.782507 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:47.782906 master-0 kubenswrapper[7864]: I0224 02:05:47.782885 7864 controller.go:115] "failed to update lease 
using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 02:05:48.003565 master-0 kubenswrapper[7864]: I0224 02:05:48.003499 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-drrqm_3332acec-1553-4594-a903-a322399f6d9d/network-operator/0.log" Feb 24 02:05:48.003755 master-0 kubenswrapper[7864]: I0224 02:05:48.003595 7864 generic.go:334] "Generic (PLEG): container finished" podID="3332acec-1553-4594-a903-a322399f6d9d" containerID="bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241" exitCode=255 Feb 24 02:05:50.018087 master-0 kubenswrapper[7864]: I0224 02:05:50.018038 7864 generic.go:334] "Generic (PLEG): container finished" podID="b36d8451-0fda-4d9d-a850-d05c8f847016" containerID="681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485" exitCode=0 Feb 24 02:05:50.021616 master-0 kubenswrapper[7864]: I0224 02:05:50.021526 7864 generic.go:334] "Generic (PLEG): container finished" podID="f5463fbf-ac21-4058-9a3b-30d0e5ea31b7" containerID="68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049" exitCode=0 Feb 24 02:05:52.748822 master-0 kubenswrapper[7864]: I0224 02:05:52.748751 7864 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-jb9vb container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Feb 24 02:05:52.749539 master-0 kubenswrapper[7864]: I0224 02:05:52.748839 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" podUID="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Feb 24 02:05:53.044448 master-0 kubenswrapper[7864]: I0224 02:05:53.044296 7864 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-p5b6q_adc1097b-c1ab-4f09-965d-1c819671475b/approver/0.log" Feb 24 02:05:53.045354 master-0 kubenswrapper[7864]: I0224 02:05:53.045294 7864 generic.go:334] "Generic (PLEG): container finished" podID="adc1097b-c1ab-4f09-965d-1c819671475b" containerID="40959528c0e652134371f4afb20a4ee849f4f1c1c0599ddd64b9076a7771bc13" exitCode=1 Feb 24 02:05:55.060215 master-0 kubenswrapper[7864]: I0224 02:05:55.059987 7864 generic.go:334] "Generic (PLEG): container finished" podID="a02536a3-7d3e-4e74-9625-aefed518ec35" containerID="7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84" exitCode=0 Feb 24 02:05:55.062165 master-0 kubenswrapper[7864]: I0224 02:05:55.062081 7864 generic.go:334] "Generic (PLEG): container finished" podID="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" containerID="fcdcbf5a149a0b578453f994965bc5ee1ca152377a3fb51c3ba2512d342fe454" exitCode=0 Feb 24 02:05:55.878682 master-0 kubenswrapper[7864]: I0224 02:05:55.878568 7864 status_manager.go:851] "Failed to get status for pod" podUID="a4cea44a-1c6e-465f-97df-2c951056cb85" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods control-plane-machine-set-operator-686847ff5f-ckntz)" Feb 24 02:05:57.613177 master-0 kubenswrapper[7864]: E0224 02:05:57.613023 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:05:57.613177 master-0 kubenswrapper[7864]: E0224 02:05:57.613083 7864 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 02:05:57.784305 master-0 kubenswrapper[7864]: E0224 
02:05:57.784209 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="200ms" Feb 24 02:05:59.890169 master-0 kubenswrapper[7864]: E0224 02:05:59.890091 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:05:59.891022 master-0 kubenswrapper[7864]: E0224 02:05:59.890335 7864 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Feb 24 02:05:59.891022 master-0 kubenswrapper[7864]: I0224 02:05:59.890377 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:05:59.891022 master-0 kubenswrapper[7864]: I0224 02:05:59.890412 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:05:59.892219 master-0 kubenswrapper[7864]: I0224 02:05:59.892150 7864 scope.go:117] "RemoveContainer" containerID="5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce" Feb 24 02:05:59.896223 master-0 kubenswrapper[7864]: I0224 02:05:59.896147 7864 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 24 02:05:59.896351 master-0 kubenswrapper[7864]: I0224 02:05:59.896303 7864 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108" gracePeriod=30 Feb 24 02:05:59.906810 master-0 kubenswrapper[7864]: I0224 02:05:59.906766 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 02:06:00.104113 master-0 kubenswrapper[7864]: I0224 02:06:00.104053 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108" exitCode=2 Feb 24 02:06:00.111878 master-0 kubenswrapper[7864]: I0224 02:06:00.111825 7864 generic.go:334] "Generic (PLEG): container finished" podID="fcbda577-b943-4b5c-b041-948aece8e40f" containerID="34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc" exitCode=0 Feb 24 02:06:00.566073 master-0 kubenswrapper[7864]: I0224 02:06:00.566025 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_bd02da41-8a48-4436-ae58-6363e7554898/installer/0.log" Feb 24 02:06:00.566256 master-0 kubenswrapper[7864]: I0224 02:06:00.566129 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 02:06:00.608736 master-0 kubenswrapper[7864]: I0224 02:06:00.608697 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-var-lock\") pod \"bd02da41-8a48-4436-ae58-6363e7554898\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " Feb 24 02:06:00.608913 master-0 kubenswrapper[7864]: I0224 02:06:00.608779 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-var-lock" (OuterVolumeSpecName: "var-lock") pod "bd02da41-8a48-4436-ae58-6363e7554898" (UID: "bd02da41-8a48-4436-ae58-6363e7554898"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:06:00.608913 master-0 kubenswrapper[7864]: I0224 02:06:00.608816 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd02da41-8a48-4436-ae58-6363e7554898-kube-api-access\") pod \"bd02da41-8a48-4436-ae58-6363e7554898\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " Feb 24 02:06:00.608913 master-0 kubenswrapper[7864]: I0224 02:06:00.608905 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-kubelet-dir\") pod \"bd02da41-8a48-4436-ae58-6363e7554898\" (UID: \"bd02da41-8a48-4436-ae58-6363e7554898\") " Feb 24 02:06:00.609125 master-0 kubenswrapper[7864]: I0224 02:06:00.609055 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bd02da41-8a48-4436-ae58-6363e7554898" (UID: "bd02da41-8a48-4436-ae58-6363e7554898"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:06:00.609312 master-0 kubenswrapper[7864]: I0224 02:06:00.609274 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:06:00.609312 master-0 kubenswrapper[7864]: I0224 02:06:00.609311 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd02da41-8a48-4436-ae58-6363e7554898-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:06:00.613991 master-0 kubenswrapper[7864]: I0224 02:06:00.613907 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd02da41-8a48-4436-ae58-6363e7554898-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bd02da41-8a48-4436-ae58-6363e7554898" (UID: "bd02da41-8a48-4436-ae58-6363e7554898"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:06:00.710872 master-0 kubenswrapper[7864]: I0224 02:06:00.710825 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd02da41-8a48-4436-ae58-6363e7554898-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:06:01.119963 master-0 kubenswrapper[7864]: I0224 02:06:01.119891 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_bd02da41-8a48-4436-ae58-6363e7554898/installer/0.log" Feb 24 02:06:01.120814 master-0 kubenswrapper[7864]: I0224 02:06:01.120095 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 02:06:01.122737 master-0 kubenswrapper[7864]: I0224 02:06:01.122676 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7fgn_7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/openshift-controller-manager-operator/0.log" Feb 24 02:06:01.122909 master-0 kubenswrapper[7864]: I0224 02:06:01.122738 7864 generic.go:334] "Generic (PLEG): container finished" podID="7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c" containerID="e9d2b8b0026aada75f8d27003b5a7df3b3f0253b60da8ae01339ca6b74582705" exitCode=1 Feb 24 02:06:02.971239 master-0 kubenswrapper[7864]: E0224 02:06:02.971066 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{control-plane-machine-set-operator-686847ff5f-ckntz.18970c7c31b01417 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:control-plane-machine-set-operator-686847ff5f-ckntz,UID:a4cea44a-1c6e-465f-97df-2c951056cb85,APIVersion:v1,ResourceVersion:8471,FieldPath:spec.containers{control-plane-machine-set-operator},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\" in 11.169s (11.169s including waiting). 
Image size: 470575802 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:05:05.942680599 +0000 UTC m=+70.270334271,LastTimestamp:2026-02-24 02:05:05.942680599 +0000 UTC m=+70.270334271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:06:03.166748 master-0 kubenswrapper[7864]: I0224 02:06:03.166695 7864 generic.go:334] "Generic (PLEG): container finished" podID="f85222bf-f51a-4232-8db1-1e6ee593617b" containerID="2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369" exitCode=0 Feb 24 02:06:07.985460 master-0 kubenswrapper[7864]: E0224 02:06:07.985288 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 24 02:06:11.222963 master-0 kubenswrapper[7864]: I0224 02:06:11.222887 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_683deae1-94b1-4c17-a73f-ad628a09134b/installer/0.log" Feb 24 02:06:11.223799 master-0 kubenswrapper[7864]: I0224 02:06:11.222964 7864 generic.go:334] "Generic (PLEG): container finished" podID="683deae1-94b1-4c17-a73f-ad628a09134b" containerID="94401de1842b75a4dd153e2d7cb3bd01f3f26706beddf59514cdea6c0eb4a139" exitCode=1 Feb 24 02:06:17.782474 master-0 kubenswrapper[7864]: E0224 02:06:17.782169 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:06:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:06:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:06:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:06:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:08cff7c9164822cf90c1ddc99284f5fd3c4efbfdf7ff5d2da94ff20f03d57215\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8665346de3cec5b1443fb1e3bf6389962210affa684e5c1b521ec342f56e0901\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1703852494},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:10e72e1dffd75bda73d89a11e18d98c99255c0f2c54d81f82a2a48b0b86b96b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d64168b357c44a3e5febdd4d99c285c68217a6568f9de2371d72e8a089d42b69\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1238591178},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:155018f64a4d43025cb88586009847bd0f7844afa3e1aa81639d31b96bebd68e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:4154e7856e2578eae0af7bc7ade3338a49c179e8e0b9d8b5167540e580ffc22b\\\",\\\"registry.redhat.io/redhat/community-opera
tor-index:v4.18\\\"],\\\"sizeBytes\\\":1210563790},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb9167
9c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\
"],\\\"sizeBytes\\\":448723134}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:06:18.387435 master-0 kubenswrapper[7864]: E0224 02:06:18.387196 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 24 02:06:27.783221 master-0 kubenswrapper[7864]: E0224 02:06:27.783155 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:06:29.189653 master-0 kubenswrapper[7864]: E0224 02:06:29.189504 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 24 02:06:33.910316 master-0 kubenswrapper[7864]: E0224 02:06:33.910238 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:06:33.911279 master-0 kubenswrapper[7864]: E0224 02:06:33.910518 7864 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.02s" Feb 24 02:06:33.911530 master-0 kubenswrapper[7864]: I0224 02:06:33.911456 7864 scope.go:117] "RemoveContainer" containerID="bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241" Feb 24 
02:06:33.911808 master-0 kubenswrapper[7864]: I0224 02:06:33.911740 7864 scope.go:117] "RemoveContainer" containerID="34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc" Feb 24 02:06:33.912155 master-0 kubenswrapper[7864]: I0224 02:06:33.912091 7864 scope.go:117] "RemoveContainer" containerID="2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369" Feb 24 02:06:33.913394 master-0 kubenswrapper[7864]: I0224 02:06:33.913129 7864 scope.go:117] "RemoveContainer" containerID="681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485" Feb 24 02:06:33.916566 master-0 kubenswrapper[7864]: I0224 02:06:33.915834 7864 scope.go:117] "RemoveContainer" containerID="7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84" Feb 24 02:06:33.916566 master-0 kubenswrapper[7864]: I0224 02:06:33.916082 7864 scope.go:117] "RemoveContainer" containerID="68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049" Feb 24 02:06:33.921116 master-0 kubenswrapper[7864]: I0224 02:06:33.921064 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 02:06:34.384498 master-0 kubenswrapper[7864]: I0224 02:06:34.384448 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-drrqm_3332acec-1553-4594-a903-a322399f6d9d/network-operator/0.log" Feb 24 02:06:36.974508 master-0 kubenswrapper[7864]: E0224 02:06:36.974293 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-hrmdr.18970c7c34c90a12 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-hrmdr,UID:37e3de57-34a2-4d55-9200-1bb94530c4ba,APIVersion:v1,ResourceVersion:7739,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 20.204s (20.205s including waiting). Image size: 1202767548 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:05:05.994648082 +0000 UTC m=+70.322301724,LastTimestamp:2026-02-24 02:05:05.994648082 +0000 UTC m=+70.322301724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:06:37.784795 master-0 kubenswrapper[7864]: E0224 02:06:37.784632 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:06:40.791076 master-0 kubenswrapper[7864]: E0224 02:06:40.790938 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 24 02:06:41.441344 master-0 kubenswrapper[7864]: I0224 02:06:41.441238 7864 generic.go:334] "Generic (PLEG): container finished" podID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerID="f88738c7cb3808e8ebb5ddd209f4e28577d6aec5e69f689e145c36b78a77fe4b" exitCode=0 Feb 24 02:06:44.464593 master-0 kubenswrapper[7864]: I0224 02:06:44.464515 7864 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-hvr8b_4a2d8ef6-14ac-490d-a931-7082344d3f46/manager/0.log" Feb 24 02:06:44.465359 master-0 kubenswrapper[7864]: I0224 02:06:44.464624 7864 generic.go:334] "Generic (PLEG): container finished" podID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerID="e69376d98cee67244b069177748eb8161f1ffee16e9b9f5abd63b6aff145de6c" exitCode=1 Feb 24 02:06:44.466984 master-0 kubenswrapper[7864]: I0224 02:06:44.466927 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhklz_4f5b3b93-a59d-495c-a311-8913fa6000fc/manager/0.log" Feb 24 02:06:44.467648 master-0 kubenswrapper[7864]: I0224 02:06:44.467530 7864 generic.go:334] "Generic (PLEG): container finished" podID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerID="2a70331e31f309db225d3996274bc257195cff624763144e3200d4a89257b219" exitCode=1 Feb 24 02:06:45.477317 master-0 kubenswrapper[7864]: I0224 02:06:45.477263 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/0.log" Feb 24 02:06:45.478068 master-0 kubenswrapper[7864]: I0224 02:06:45.477336 7864 generic.go:334] "Generic (PLEG): container finished" podID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" containerID="1dd68f4f64e0c62e01d0497cf59111173fe627d06971140a305a4032c20cc485" exitCode=1 Feb 24 02:06:45.674986 master-0 kubenswrapper[7864]: I0224 02:06:45.674935 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:06:45.675279 master-0 kubenswrapper[7864]: I0224 02:06:45.675234 7864 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:06:45.675467 master-0 kubenswrapper[7864]: I0224 02:06:45.675001 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:06:45.675561 master-0 kubenswrapper[7864]: I0224 02:06:45.675521 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:06:45.850041 master-0 kubenswrapper[7864]: I0224 02:06:45.849916 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:06:45.850180 master-0 kubenswrapper[7864]: I0224 02:06:45.850027 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:06:45.850180 master-0 kubenswrapper[7864]: I0224 02:06:45.850059 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b 
container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:06:45.850180 master-0 kubenswrapper[7864]: I0224 02:06:45.850148 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:06:47.786000 master-0 kubenswrapper[7864]: E0224 02:06:47.785867 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:06:48.939814 master-0 kubenswrapper[7864]: I0224 02:06:48.939671 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:06:48.941344 master-0 kubenswrapper[7864]: I0224 02:06:48.939839 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:06:48.941344 master-0 kubenswrapper[7864]: I0224 02:06:48.940842 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure 
output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:06:48.941344 master-0 kubenswrapper[7864]: I0224 02:06:48.940918 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:06:52.748827 master-0 kubenswrapper[7864]: I0224 02:06:52.748768 7864 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-jb9vb container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Feb 24 02:06:52.749514 master-0 kubenswrapper[7864]: I0224 02:06:52.749477 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" podUID="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Feb 24 02:06:53.532953 master-0 kubenswrapper[7864]: I0224 02:06:53.532884 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/0.log" Feb 24 02:06:53.533404 master-0 kubenswrapper[7864]: I0224 02:06:53.532961 7864 generic.go:334] "Generic (PLEG): container finished" podID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" containerID="bb8e1724e77d6ceb463e444b223fcd8637d9a803be2af1a8dcbebbfedcda21d8" exitCode=1 Feb 24 02:06:53.992544 master-0 kubenswrapper[7864]: E0224 02:06:53.992437 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 24 02:06:55.675067 master-0 kubenswrapper[7864]: I0224 02:06:55.674833 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:06:55.681321 master-0 kubenswrapper[7864]: I0224 02:06:55.675763 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:06:55.849674 master-0 kubenswrapper[7864]: I0224 02:06:55.849595 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:06:55.849674 master-0 kubenswrapper[7864]: I0224 02:06:55.849677 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:06:55.883486 master-0 kubenswrapper[7864]: I0224 02:06:55.883371 7864 status_manager.go:851] "Failed to get status for pod" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" 
pod="openshift-marketplace/redhat-marketplace-hrmdr" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-hrmdr)" Feb 24 02:06:57.786500 master-0 kubenswrapper[7864]: E0224 02:06:57.786366 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:06:57.786500 master-0 kubenswrapper[7864]: E0224 02:06:57.786430 7864 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 02:06:58.941705 master-0 kubenswrapper[7864]: I0224 02:06:58.941639 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:06:58.942762 master-0 kubenswrapper[7864]: I0224 02:06:58.942699 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:06:58.942987 master-0 kubenswrapper[7864]: I0224 02:06:58.941693 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:06:58.943232 master-0 kubenswrapper[7864]: I0224 02:06:58.943190 7864 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:07:01.590688 master-0 kubenswrapper[7864]: I0224 02:07:01.590615 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="28d78d14185433406f5d6be1256f4efc7cd117cd145b616d5b8ccdbc8f03929c" exitCode=1 Feb 24 02:07:05.675385 master-0 kubenswrapper[7864]: I0224 02:07:05.675278 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:07:05.675385 master-0 kubenswrapper[7864]: I0224 02:07:05.675329 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:07:05.676306 master-0 kubenswrapper[7864]: I0224 02:07:05.675380 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:07:05.676306 master-0 kubenswrapper[7864]: I0224 02:07:05.675387 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": 
dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:07:05.849394 master-0 kubenswrapper[7864]: I0224 02:07:05.849306 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:07:05.849394 master-0 kubenswrapper[7864]: I0224 02:07:05.849364 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:07:05.849794 master-0 kubenswrapper[7864]: I0224 02:07:05.849459 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:07:05.849794 master-0 kubenswrapper[7864]: I0224 02:07:05.849549 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:07:07.923840 master-0 kubenswrapper[7864]: E0224 02:07:07.923742 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:07:07.924888 master-0 kubenswrapper[7864]: E0224 02:07:07.924023 7864 
kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s" Feb 24 02:07:07.924888 master-0 kubenswrapper[7864]: I0224 02:07:07.924059 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"bd02da41-8a48-4436-ae58-6363e7554898","Type":"ContainerDied","Data":"beff9cdd09dcda0a6932e333a63d749970c5574701c511858c571df2f87fa178"} Feb 24 02:07:07.935564 master-0 kubenswrapper[7864]: I0224 02:07:07.935505 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 02:07:08.940358 master-0 kubenswrapper[7864]: I0224 02:07:08.940250 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:07:08.940948 master-0 kubenswrapper[7864]: I0224 02:07:08.940351 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:07:08.941143 master-0 kubenswrapper[7864]: I0224 02:07:08.941029 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:07:08.941294 master-0 kubenswrapper[7864]: I0224 02:07:08.941179 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" 
podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:07:10.394914 master-0 kubenswrapper[7864]: E0224 02:07:10.394467 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 02:07:10.977570 master-0 kubenswrapper[7864]: E0224 02:07:10.977336 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-rvp5j.18970c7c364e1142 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-rvp5j,UID:dae6353d-97ee-46f8-8430-0b5211134a03,APIVersion:v1,ResourceVersion:7629,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 22.289s (22.289s including waiting). 
Image size: 1210563790 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:05:06.020143426 +0000 UTC m=+70.347797078,LastTimestamp:2026-02-24 02:05:06.020143426 +0000 UTC m=+70.347797078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:07:15.675222 master-0 kubenswrapper[7864]: I0224 02:07:15.675107 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:07:15.676269 master-0 kubenswrapper[7864]: I0224 02:07:15.675214 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:07:15.849828 master-0 kubenswrapper[7864]: I0224 02:07:15.849722 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:07:15.849988 master-0 kubenswrapper[7864]: I0224 02:07:15.849844 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:07:17.820675 master-0 kubenswrapper[7864]: E0224 
02:07:17.820390 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:07:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:07:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:07:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:07:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:08cff7c9164822cf90c1ddc99284f5fd3c4efbfdf7ff5d2da94ff20f03d57215\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8665346de3cec5b1443fb1e3bf6389962210affa684e5c1b521ec342f56e0901\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1703852494},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:10e72e1dffd75bda73d89a11e18d98c99255c0f2c54d81f82a2a48b0b86b96b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d64168b357c44a3e5febdd4d99c285c68217a6568f9de2371d72e8a089d42b69\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1238591178},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:155018f64a4d43025cb88586009847bd0f7844afa3e1aa81639d31b96bebd68e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha
256:4154e7856e2578eae0af7bc7ade3338a49c179e8e0b9d8b5167540e580ffc22b\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210563790},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da
025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:07:18.940185 master-0 kubenswrapper[7864]: I0224 02:07:18.939959 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:07:18.940185 master-0 kubenswrapper[7864]: I0224 02:07:18.940059 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:07:25.674420 master-0 kubenswrapper[7864]: I0224 02:07:25.674250 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:07:25.674420 master-0 kubenswrapper[7864]: I0224 02:07:25.674361 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:07:25.675442 master-0 kubenswrapper[7864]: I0224 02:07:25.674267 7864 patch_prober.go:28] interesting 
pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:07:25.675442 master-0 kubenswrapper[7864]: I0224 02:07:25.674507 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:07:25.849326 master-0 kubenswrapper[7864]: I0224 02:07:25.849204 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:07:25.849326 master-0 kubenswrapper[7864]: I0224 02:07:25.849300 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:07:25.849540 master-0 kubenswrapper[7864]: I0224 02:07:25.849223 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:07:25.849540 master-0 kubenswrapper[7864]: I0224 02:07:25.849398 7864 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:07:27.396635 master-0 kubenswrapper[7864]: E0224 02:07:27.396497 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Feb 24 02:07:27.821837 master-0 kubenswrapper[7864]: E0224 02:07:27.821706 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Feb 24 02:07:28.940138 master-0 kubenswrapper[7864]: I0224 02:07:28.939994 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:07:28.940138 master-0 kubenswrapper[7864]: I0224 02:07:28.940085 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:07:30.799106 master-0 kubenswrapper[7864]: I0224 02:07:30.798977 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/1.log" Feb 24 02:07:30.800112 master-0 
kubenswrapper[7864]: I0224 02:07:30.799800 7864 generic.go:334] "Generic (PLEG): container finished" podID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerID="3b9fb79825f01ed4c00a7769132bc445aa326532ee27382a22aec874d90be7a4" exitCode=255 Feb 24 02:07:32.815890 master-0 kubenswrapper[7864]: I0224 02:07:32.815763 7864 generic.go:334] "Generic (PLEG): container finished" podID="523033b8-4101-4a55-8320-55bef04ddaaf" containerID="1d78e51e0a1da7f353fa2fc0c8e9c9a46d124e7c769ba9917e9138703d244089" exitCode=0 Feb 24 02:07:35.675141 master-0 kubenswrapper[7864]: I0224 02:07:35.675081 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:07:35.675967 master-0 kubenswrapper[7864]: I0224 02:07:35.675799 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:07:35.838285 master-0 kubenswrapper[7864]: I0224 02:07:35.838170 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/0.log" Feb 24 02:07:35.838285 master-0 kubenswrapper[7864]: I0224 02:07:35.838248 7864 generic.go:334] "Generic (PLEG): container finished" podID="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" containerID="7144c5e947ad686471e67b52048230854640c3d324dfe4c40330e542a4803eda" exitCode=1 Feb 24 02:07:35.849969 master-0 kubenswrapper[7864]: I0224 02:07:35.849856 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager 
namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:07:35.849969 master-0 kubenswrapper[7864]: I0224 02:07:35.849942 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:07:37.822897 master-0 kubenswrapper[7864]: E0224 02:07:37.822762 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:07:38.940174 master-0 kubenswrapper[7864]: I0224 02:07:38.940047 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:07:38.940174 master-0 kubenswrapper[7864]: I0224 02:07:38.940137 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:07:40.873671 master-0 kubenswrapper[7864]: I0224 02:07:40.873535 7864 generic.go:334] "Generic (PLEG): container finished" podID="bd1a99d5-e213-42b3-9538-44f68d993184" containerID="5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21" exitCode=0 Feb 
24 02:07:41.938112 master-0 kubenswrapper[7864]: E0224 02:07:41.938012 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:07:41.939299 master-0 kubenswrapper[7864]: E0224 02:07:41.938262 7864 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Feb 24 02:07:41.949066 master-0 kubenswrapper[7864]: I0224 02:07:41.949000 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 02:07:44.398946 master-0 kubenswrapper[7864]: E0224 02:07:44.398821 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 02:07:44.981263 master-0 kubenswrapper[7864]: E0224 02:07:44.981038 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-dwmm5.18970c7c368d4006 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-dwmm5,UID:afda9f0b-a304-490a-a080-0384a0a4e85b,APIVersion:v1,ResourceVersion:7608,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 22.3s (22.3s including waiting). 
Image size: 1238591178 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:05:06.024284166 +0000 UTC m=+70.351937818,LastTimestamp:2026-02-24 02:05:06.024284166 +0000 UTC m=+70.351937818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:07:45.136075 master-0 kubenswrapper[7864]: I0224 02:07:45.135869 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:07:45.136075 master-0 kubenswrapper[7864]: I0224 02:07:45.135956 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:07:45.136075 master-0 kubenswrapper[7864]: I0224 02:07:45.135873 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:07:45.136372 master-0 kubenswrapper[7864]: I0224 02:07:45.136136 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:07:45.675297 master-0 kubenswrapper[7864]: I0224 
02:07:45.675208 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:07:45.676137 master-0 kubenswrapper[7864]: I0224 02:07:45.675314 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:07:45.849786 master-0 kubenswrapper[7864]: I0224 02:07:45.849662 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:07:45.849786 master-0 kubenswrapper[7864]: I0224 02:07:45.849757 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:07:47.823664 master-0 kubenswrapper[7864]: E0224 02:07:47.823471 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:07:48.942146 master-0 kubenswrapper[7864]: I0224 02:07:48.942025 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:07:48.943913 master-0 kubenswrapper[7864]: I0224 02:07:48.942136 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:07:52.748198 master-0 kubenswrapper[7864]: I0224 02:07:52.748116 7864 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-jb9vb container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" start-of-body= Feb 24 02:07:52.749048 master-0 kubenswrapper[7864]: I0224 02:07:52.748211 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" podUID="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.14:8443/healthz\": dial tcp 10.128.0.14:8443: connect: connection refused" Feb 24 02:07:55.135864 master-0 kubenswrapper[7864]: I0224 02:07:55.135740 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:07:55.135864 master-0 kubenswrapper[7864]: I0224 02:07:55.135828 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:07:55.136881 master-0 kubenswrapper[7864]: I0224 02:07:55.136307 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:07:55.136881 master-0 kubenswrapper[7864]: I0224 02:07:55.136403 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:07:55.674995 master-0 kubenswrapper[7864]: I0224 02:07:55.674799 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:07:55.674995 master-0 kubenswrapper[7864]: I0224 02:07:55.674905 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:07:55.849779 master-0 kubenswrapper[7864]: I0224 02:07:55.849651 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get 
\"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:07:55.849779 master-0 kubenswrapper[7864]: I0224 02:07:55.849757 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:07:55.885192 master-0 kubenswrapper[7864]: I0224 02:07:55.885051 7864 status_manager.go:851] "Failed to get status for pod" podUID="64b7ea36-8849-4955-80b5-c7e7c12fcc29" pod="openshift-etcd/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Feb 24 02:07:57.824462 master-0 kubenswrapper[7864]: E0224 02:07:57.824369 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:07:57.824462 master-0 kubenswrapper[7864]: E0224 02:07:57.824438 7864 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 02:07:58.941432 master-0 kubenswrapper[7864]: I0224 02:07:58.941293 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:07:58.941432 master-0 kubenswrapper[7864]: I0224 02:07:58.941386 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" 
podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:08:01.400096 master-0 kubenswrapper[7864]: E0224 02:08:01.399954 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 02:08:05.045956 master-0 kubenswrapper[7864]: I0224 02:08:05.045889 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-drrqm_3332acec-1553-4594-a903-a322399f6d9d/network-operator/1.log" Feb 24 02:08:05.046969 master-0 kubenswrapper[7864]: I0224 02:08:05.046915 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-drrqm_3332acec-1553-4594-a903-a322399f6d9d/network-operator/0.log" Feb 24 02:08:05.047066 master-0 kubenswrapper[7864]: I0224 02:08:05.046995 7864 generic.go:334] "Generic (PLEG): container finished" podID="3332acec-1553-4594-a903-a322399f6d9d" containerID="ab02a48b627b8d33941cca436e701fe8cca5e7a818343839afa499d5ab3abe6a" exitCode=255 Feb 24 02:08:05.049420 master-0 kubenswrapper[7864]: I0224 02:08:05.049378 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-sl5hz_b36d8451-0fda-4d9d-a850-d05c8f847016/openshift-apiserver-operator/1.log" Feb 24 02:08:05.050146 master-0 kubenswrapper[7864]: I0224 02:08:05.050087 7864 generic.go:334] "Generic (PLEG): container finished" podID="b36d8451-0fda-4d9d-a850-d05c8f847016" containerID="9d6fec3fa582bb40f876b3cafc2f570058ac361dce1068e53e87a7e383e88cf2" exitCode=255 Feb 24 02:08:05.052689 master-0 kubenswrapper[7864]: I0224 
02:08:05.052629 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-tl97n_a02536a3-7d3e-4e74-9625-aefed518ec35/kube-controller-manager-operator/1.log" Feb 24 02:08:05.053349 master-0 kubenswrapper[7864]: I0224 02:08:05.053298 7864 generic.go:334] "Generic (PLEG): container finished" podID="a02536a3-7d3e-4e74-9625-aefed518ec35" containerID="fe82991d620c953a66413bff375a2b214f4a5b8652aa8341d49741ccabb41961" exitCode=255 Feb 24 02:08:05.055551 master-0 kubenswrapper[7864]: I0224 02:08:05.055505 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-2492q_f85222bf-f51a-4232-8db1-1e6ee593617b/kube-apiserver-operator/1.log" Feb 24 02:08:05.056321 master-0 kubenswrapper[7864]: I0224 02:08:05.056265 7864 generic.go:334] "Generic (PLEG): container finished" podID="f85222bf-f51a-4232-8db1-1e6ee593617b" containerID="35a7a7e510655bf960c0ebb22ba6e0c6db1f746f1a9047b38fb6bb5a0b24bc60" exitCode=255 Feb 24 02:08:05.059028 master-0 kubenswrapper[7864]: I0224 02:08:05.058995 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-8tttg_f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/kube-scheduler-operator-container/1.log" Feb 24 02:08:05.059841 master-0 kubenswrapper[7864]: I0224 02:08:05.059800 7864 generic.go:334] "Generic (PLEG): container finished" podID="f5463fbf-ac21-4058-9a3b-30d0e5ea31b7" containerID="5d2191d76e599e2fb849008551eb68ecc941d4be08a06831c47e2e57561783d8" exitCode=255 Feb 24 02:08:05.062109 master-0 kubenswrapper[7864]: I0224 02:08:05.062068 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-xdws2_fcbda577-b943-4b5c-b041-948aece8e40f/kube-storage-version-migrator-operator/1.log" Feb 24 02:08:05.062791 
master-0 kubenswrapper[7864]: I0224 02:08:05.062756 7864 generic.go:334] "Generic (PLEG): container finished" podID="fcbda577-b943-4b5c-b041-948aece8e40f" containerID="b052a287fe47bb0fa4d703a983f03a367ec9eff6f9c816432ac44ee4c3a812f3" exitCode=255 Feb 24 02:08:05.136330 master-0 kubenswrapper[7864]: I0224 02:08:05.136239 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:08:05.136553 master-0 kubenswrapper[7864]: I0224 02:08:05.136511 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:08:05.136789 master-0 kubenswrapper[7864]: I0224 02:08:05.136243 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:08:05.136882 master-0 kubenswrapper[7864]: I0224 02:08:05.136818 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:08:05.675120 master-0 kubenswrapper[7864]: I0224 02:08:05.675041 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager 
namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:08:05.675389 master-0 kubenswrapper[7864]: I0224 02:08:05.675140 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:08:05.849693 master-0 kubenswrapper[7864]: I0224 02:08:05.849641 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:08:05.849951 master-0 kubenswrapper[7864]: I0224 02:08:05.849907 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:08:08.940235 master-0 kubenswrapper[7864]: I0224 02:08:08.940069 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:08:08.940235 master-0 kubenswrapper[7864]: I0224 02:08:08.940156 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:08:15.136363 master-0 kubenswrapper[7864]: I0224 02:08:15.136238 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:08:15.136363 master-0 kubenswrapper[7864]: I0224 02:08:15.136348 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:08:15.674808 master-0 kubenswrapper[7864]: I0224 02:08:15.674724 7864 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-jhklz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Feb 24 02:08:15.675173 master-0 kubenswrapper[7864]: I0224 02:08:15.674827 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Feb 24 02:08:15.850095 master-0 kubenswrapper[7864]: I0224 02:08:15.850006 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get 
\"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:08:15.850265 master-0 kubenswrapper[7864]: I0224 02:08:15.850097 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:08:15.952333 master-0 kubenswrapper[7864]: E0224 02:08:15.952259 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:08:15.952618 master-0 kubenswrapper[7864]: E0224 02:08:15.952546 7864 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Feb 24 02:08:15.952829 master-0 kubenswrapper[7864]: I0224 02:08:15.952635 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"a8c66d27d61884b6fe77ab4fbf5e74a8d795491882a22f1608be7b10b5068b90"} Feb 24 02:08:15.952829 master-0 kubenswrapper[7864]: I0224 02:08:15.952745 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:08:15.954082 master-0 kubenswrapper[7864]: I0224 02:08:15.953919 7864 scope.go:117] "RemoveContainer" containerID="2a70331e31f309db225d3996274bc257195cff624763144e3200d4a89257b219" Feb 24 02:08:15.965073 master-0 kubenswrapper[7864]: I0224 02:08:15.965001 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 02:08:17.141518 master-0 kubenswrapper[7864]: I0224 02:08:17.141431 7864 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhklz_4f5b3b93-a59d-495c-a311-8913fa6000fc/manager/0.log" Feb 24 02:08:18.225001 master-0 kubenswrapper[7864]: E0224 02:08:18.224727 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:08:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:08:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:08:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:08:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:08cff7c9164822cf90c1ddc99284f5fd3c4efbfdf7ff5d2da94ff20f03d57215\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8665346de3cec5b1443fb1e3bf6389962210affa684e5c1b521ec342f56e0901\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1703852494},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:10e72e1dffd75bda73d89a11e18d98c99255c0f2c54d81f82a2a48b0b86b96b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d64168b357c44a3e5febdd4d99c285c68217a6568f9de2371d72e8a089d42b69\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1238591178},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237
794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:155018f64a4d43025cb88586009847bd0f7844afa3e1aa81639d31b96bebd68e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:4154e7856e2578eae0af7bc7ade3338a49c179e8e0b9d8b5167540e580ffc22b\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210563790},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66
d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeByte
s\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:08:18.402365 master-0 kubenswrapper[7864]: E0224 02:08:18.402226 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Feb 24 02:08:18.940557 master-0 kubenswrapper[7864]: I0224 02:08:18.940464 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:08:18.941064 master-0 kubenswrapper[7864]: I0224 02:08:18.940636 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:08:18.983947 master-0 kubenswrapper[7864]: E0224 02:08:18.983754 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-operators-g862w.18970c7c375cb2b6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-g862w,UID:66fc4bf9-47d0-4530-a49e-912a61cc35fd,APIVersion:v1,ResourceVersion:7800,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\" in 19.239s (19.239s including waiting). Image size: 1703852494 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:05:06.037879478 +0000 UTC m=+70.365533140,LastTimestamp:2026-02-24 02:05:06.037879478 +0000 UTC m=+70.365533140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:08:25.136861 master-0 kubenswrapper[7864]: I0224 02:08:25.136720 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:08:25.136861 master-0 kubenswrapper[7864]: I0224 02:08:25.136817 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:08:25.849609 master-0 kubenswrapper[7864]: I0224 02:08:25.849480 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:08:25.849888 master-0 kubenswrapper[7864]: I0224 02:08:25.849650 
7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:08:28.226118 master-0 kubenswrapper[7864]: E0224 02:08:28.225987 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:08:28.939641 master-0 kubenswrapper[7864]: I0224 02:08:28.939503 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:08:28.939993 master-0 kubenswrapper[7864]: I0224 02:08:28.939719 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:08:28.963774 master-0 kubenswrapper[7864]: E0224 02:08:28.963451 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 24 02:08:35.136175 master-0 kubenswrapper[7864]: I0224 02:08:35.136101 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:08:35.137188 master-0 kubenswrapper[7864]: I0224 02:08:35.136201 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:08:35.404801 master-0 kubenswrapper[7864]: E0224 02:08:35.404180 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 02:08:35.850161 master-0 kubenswrapper[7864]: I0224 02:08:35.850036 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:08:35.850161 master-0 kubenswrapper[7864]: I0224 02:08:35.850123 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:08:38.226492 master-0 kubenswrapper[7864]: E0224 02:08:38.226405 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Feb 24 02:08:38.940443 master-0 kubenswrapper[7864]: I0224 02:08:38.940373 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:08:38.941945 master-0 kubenswrapper[7864]: I0224 02:08:38.940481 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:08:45.136599 master-0 kubenswrapper[7864]: I0224 02:08:45.136482 7864 patch_prober.go:28] interesting pod/controller-manager-57df7db547-2v9c5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Feb 24 02:08:45.137653 master-0 kubenswrapper[7864]: I0224 02:08:45.136650 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Feb 24 02:08:45.849792 master-0 kubenswrapper[7864]: I0224 02:08:45.849676 7864 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-hvr8b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 24 02:08:45.849792 
master-0 kubenswrapper[7864]: I0224 02:08:45.849747 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 24 02:08:48.227131 master-0 kubenswrapper[7864]: E0224 02:08:48.227035 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:08:48.940639 master-0 kubenswrapper[7864]: I0224 02:08:48.940443 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Feb 24 02:08:48.940639 master-0 kubenswrapper[7864]: I0224 02:08:48.940559 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Feb 24 02:08:49.968235 master-0 kubenswrapper[7864]: E0224 02:08:49.968137 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 02:08:49.969283 master-0 kubenswrapper[7864]: E0224 02:08:49.968500 7864 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Feb 24 
02:08:49.970612 master-0 kubenswrapper[7864]: I0224 02:08:49.970483 7864 scope.go:117] "RemoveContainer" containerID="9d6fec3fa582bb40f876b3cafc2f570058ac361dce1068e53e87a7e383e88cf2" Feb 24 02:08:49.971681 master-0 kubenswrapper[7864]: I0224 02:08:49.971610 7864 scope.go:117] "RemoveContainer" containerID="b052a287fe47bb0fa4d703a983f03a367ec9eff6f9c816432ac44ee4c3a812f3" Feb 24 02:08:49.971852 master-0 kubenswrapper[7864]: I0224 02:08:49.971788 7864 scope.go:117] "RemoveContainer" containerID="e69376d98cee67244b069177748eb8161f1ffee16e9b9f5abd63b6aff145de6c" Feb 24 02:08:49.973162 master-0 kubenswrapper[7864]: I0224 02:08:49.972147 7864 scope.go:117] "RemoveContainer" containerID="e9d2b8b0026aada75f8d27003b5a7df3b3f0253b60da8ae01339ca6b74582705" Feb 24 02:08:49.973162 master-0 kubenswrapper[7864]: I0224 02:08:49.972893 7864 scope.go:117] "RemoveContainer" containerID="fcdcbf5a149a0b578453f994965bc5ee1ca152377a3fb51c3ba2512d342fe454" Feb 24 02:08:49.973841 master-0 kubenswrapper[7864]: I0224 02:08:49.973153 7864 scope.go:117] "RemoveContainer" containerID="28d78d14185433406f5d6be1256f4efc7cd117cd145b616d5b8ccdbc8f03929c" Feb 24 02:08:49.973841 master-0 kubenswrapper[7864]: I0224 02:08:49.973373 7864 scope.go:117] "RemoveContainer" containerID="5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21" Feb 24 02:08:49.974717 master-0 kubenswrapper[7864]: I0224 02:08:49.973908 7864 scope.go:117] "RemoveContainer" containerID="40959528c0e652134371f4afb20a4ee849f4f1c1c0599ddd64b9076a7771bc13" Feb 24 02:08:49.975325 master-0 kubenswrapper[7864]: I0224 02:08:49.975162 7864 scope.go:117] "RemoveContainer" containerID="18e36dffd25cf50db60c55874ae5e83aa35faa4fa1dff2c477ec4899a01aa1f0" Feb 24 02:08:49.975440 master-0 kubenswrapper[7864]: I0224 02:08:49.975414 7864 scope.go:117] "RemoveContainer" containerID="fe82991d620c953a66413bff375a2b214f4a5b8652aa8341d49741ccabb41961" Feb 24 02:08:49.978190 master-0 kubenswrapper[7864]: I0224 02:08:49.976987 7864 scope.go:117] 
"RemoveContainer" containerID="35a7a7e510655bf960c0ebb22ba6e0c6db1f746f1a9047b38fb6bb5a0b24bc60" Feb 24 02:08:49.979822 master-0 kubenswrapper[7864]: I0224 02:08:49.979721 7864 scope.go:117] "RemoveContainer" containerID="1d78e51e0a1da7f353fa2fc0c8e9c9a46d124e7c769ba9917e9138703d244089" Feb 24 02:08:49.981368 master-0 kubenswrapper[7864]: I0224 02:08:49.980792 7864 scope.go:117] "RemoveContainer" containerID="3b9fb79825f01ed4c00a7769132bc445aa326532ee27382a22aec874d90be7a4" Feb 24 02:08:49.981368 master-0 kubenswrapper[7864]: I0224 02:08:49.981214 7864 scope.go:117] "RemoveContainer" containerID="5d2191d76e599e2fb849008551eb68ecc941d4be08a06831c47e2e57561783d8" Feb 24 02:08:49.981850 master-0 kubenswrapper[7864]: I0224 02:08:49.981615 7864 scope.go:117] "RemoveContainer" containerID="f88738c7cb3808e8ebb5ddd209f4e28577d6aec5e69f689e145c36b78a77fe4b" Feb 24 02:08:49.982033 master-0 kubenswrapper[7864]: I0224 02:08:49.981987 7864 scope.go:117] "RemoveContainer" containerID="7144c5e947ad686471e67b52048230854640c3d324dfe4c40330e542a4803eda" Feb 24 02:08:49.984864 master-0 kubenswrapper[7864]: I0224 02:08:49.984400 7864 scope.go:117] "RemoveContainer" containerID="1dd68f4f64e0c62e01d0497cf59111173fe627d06971140a305a4032c20cc485" Feb 24 02:08:49.984864 master-0 kubenswrapper[7864]: I0224 02:08:49.984565 7864 scope.go:117] "RemoveContainer" containerID="ab02a48b627b8d33941cca436e701fe8cca5e7a818343839afa499d5ab3abe6a" Feb 24 02:08:49.984864 master-0 kubenswrapper[7864]: I0224 02:08:49.984755 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 02:08:49.986071 master-0 kubenswrapper[7864]: I0224 02:08:49.985407 7864 scope.go:117] "RemoveContainer" containerID="bb8e1724e77d6ceb463e444b223fcd8637d9a803be2af1a8dcbebbfedcda21d8" Feb 24 02:08:50.409168 master-0 kubenswrapper[7864]: I0224 02:08:50.409123 7864 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-sl5hz_b36d8451-0fda-4d9d-a850-d05c8f847016/openshift-apiserver-operator/1.log" Feb 24 02:08:50.412781 master-0 kubenswrapper[7864]: I0224 02:08:50.412755 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7fgn_7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/openshift-controller-manager-operator/0.log" Feb 24 02:08:50.734011 master-0 kubenswrapper[7864]: I0224 02:08:50.733548 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_683deae1-94b1-4c17-a73f-ad628a09134b/installer/0.log" Feb 24 02:08:50.734011 master-0 kubenswrapper[7864]: I0224 02:08:50.733663 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 24 02:08:50.821144 master-0 kubenswrapper[7864]: I0224 02:08:50.821019 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-var-lock\") pod \"683deae1-94b1-4c17-a73f-ad628a09134b\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " Feb 24 02:08:50.821144 master-0 kubenswrapper[7864]: I0224 02:08:50.821110 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/683deae1-94b1-4c17-a73f-ad628a09134b-kube-api-access\") pod \"683deae1-94b1-4c17-a73f-ad628a09134b\" (UID: \"683deae1-94b1-4c17-a73f-ad628a09134b\") " Feb 24 02:08:50.821322 master-0 kubenswrapper[7864]: I0224 02:08:50.821239 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-kubelet-dir\") pod \"683deae1-94b1-4c17-a73f-ad628a09134b\" (UID: 
\"683deae1-94b1-4c17-a73f-ad628a09134b\") " Feb 24 02:08:50.821633 master-0 kubenswrapper[7864]: I0224 02:08:50.821592 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "683deae1-94b1-4c17-a73f-ad628a09134b" (UID: "683deae1-94b1-4c17-a73f-ad628a09134b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:08:50.821692 master-0 kubenswrapper[7864]: I0224 02:08:50.821647 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-var-lock" (OuterVolumeSpecName: "var-lock") pod "683deae1-94b1-4c17-a73f-ad628a09134b" (UID: "683deae1-94b1-4c17-a73f-ad628a09134b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:08:50.824849 master-0 kubenswrapper[7864]: I0224 02:08:50.824799 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683deae1-94b1-4c17-a73f-ad628a09134b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "683deae1-94b1-4c17-a73f-ad628a09134b" (UID: "683deae1-94b1-4c17-a73f-ad628a09134b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:08:50.922934 master-0 kubenswrapper[7864]: I0224 02:08:50.922882 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:08:50.922934 master-0 kubenswrapper[7864]: I0224 02:08:50.922919 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/683deae1-94b1-4c17-a73f-ad628a09134b-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:08:50.922934 master-0 kubenswrapper[7864]: I0224 02:08:50.922932 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/683deae1-94b1-4c17-a73f-ad628a09134b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:08:51.426200 master-0 kubenswrapper[7864]: I0224 02:08:51.426114 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/0.log" Feb 24 02:08:51.430609 master-0 kubenswrapper[7864]: I0224 02:08:51.430391 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-hvr8b_4a2d8ef6-14ac-490d-a931-7082344d3f46/manager/0.log" Feb 24 02:08:51.434065 master-0 kubenswrapper[7864]: I0224 02:08:51.434009 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-p5b6q_adc1097b-c1ab-4f09-965d-1c819671475b/approver/0.log" Feb 24 02:08:51.437437 master-0 kubenswrapper[7864]: I0224 02:08:51.437388 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-2492q_f85222bf-f51a-4232-8db1-1e6ee593617b/kube-apiserver-operator/1.log" Feb 24 02:08:51.440942 master-0 
kubenswrapper[7864]: I0224 02:08:51.440909 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-8tttg_f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/kube-scheduler-operator-container/1.log" Feb 24 02:08:51.444374 master-0 kubenswrapper[7864]: I0224 02:08:51.444346 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/0.log" Feb 24 02:08:51.447257 master-0 kubenswrapper[7864]: I0224 02:08:51.447226 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/1.log" Feb 24 02:08:51.458308 master-0 kubenswrapper[7864]: I0224 02:08:51.458262 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/0.log" Feb 24 02:08:51.467207 master-0 kubenswrapper[7864]: I0224 02:08:51.467152 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-drrqm_3332acec-1553-4594-a903-a322399f6d9d/network-operator/1.log" Feb 24 02:08:51.468137 master-0 kubenswrapper[7864]: I0224 02:08:51.468088 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-drrqm_3332acec-1553-4594-a903-a322399f6d9d/network-operator/0.log" Feb 24 02:08:51.470570 master-0 kubenswrapper[7864]: I0224 02:08:51.470507 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_683deae1-94b1-4c17-a73f-ad628a09134b/installer/0.log" Feb 24 02:08:51.470816 master-0 kubenswrapper[7864]: I0224 02:08:51.470768 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 24 02:08:51.473372 master-0 kubenswrapper[7864]: I0224 02:08:51.473338 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-xdws2_fcbda577-b943-4b5c-b041-948aece8e40f/kube-storage-version-migrator-operator/1.log" Feb 24 02:08:51.479898 master-0 kubenswrapper[7864]: I0224 02:08:51.479840 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-tl97n_a02536a3-7d3e-4e74-9625-aefed518ec35/kube-controller-manager-operator/1.log" Feb 24 02:08:52.405796 master-0 kubenswrapper[7864]: E0224 02:08:52.405654 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 02:08:52.987432 master-0 kubenswrapper[7864]: E0224 02:08:52.987199 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{control-plane-machine-set-operator-686847ff5f-ckntz.18970c7c43cbf2b4 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:control-plane-machine-set-operator-686847ff5f-ckntz,UID:a4cea44a-1c6e-465f-97df-2c951056cb85,APIVersion:v1,ResourceVersion:8471,FieldPath:spec.containers{control-plane-machine-set-operator},},Reason:Created,Message:Created container: control-plane-machine-set-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:05:06.246496948 +0000 UTC m=+70.574150600,LastTimestamp:2026-02-24 02:05:06.246496948 
+0000 UTC m=+70.574150600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:08:55.886744 master-0 kubenswrapper[7864]: I0224 02:08:55.886657 7864 status_manager.go:851] "Failed to get status for pod" podUID="56c3cb71c9851003c8de7e7c5db4b87e" pod="kube-system/bootstrap-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-scheduler-master-0)" Feb 24 02:08:58.227916 master-0 kubenswrapper[7864]: E0224 02:08:58.227842 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:08:58.228798 master-0 kubenswrapper[7864]: E0224 02:08:58.228747 7864 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 02:09:06.015008 master-0 kubenswrapper[7864]: I0224 02:09:06.014936 7864 scope.go:117] "RemoveContainer" containerID="9084ba926bf7975865b803686ed689ae33dbbe263dc377c963e7af79a6dfafbb" Feb 24 02:09:09.406559 master-0 kubenswrapper[7864]: E0224 02:09:09.406403 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 02:09:18.621258 master-0 kubenswrapper[7864]: E0224 02:09:18.620919 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:09:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:09:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:09:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:09:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:08cff7c9164822cf90c1ddc99284f5fd3c4efbfdf7ff5d2da94ff20f03d57215\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8665346de3cec5b1443fb1e3bf6389962210affa684e5c1b521ec342f56e0901\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1703852494},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:10e72e1dffd75bda73d89a11e18d98c99255c0f2c54d81f82a2a48b0b86b96b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d64168b357c44a3e5febdd4d99c285c68217a6568f9de2371d72e8a089d42b69\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1238591178},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:155018f64a4d43025cb88586009847bd0f7844afa3e1aa81639d31b96bebd68e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:4154e7856e2578eae0af7bc7ade3338a49c179e8e0b9d8b5167540e580ffc22b\\\",\\\"registry.redhat.io/redhat/community-opera
tor-index:v4.18\\\"],\\\"sizeBytes\\\":1210563790},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb9167
9c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6\\\"],\\\"sizeBytes\\\":470717179},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\
"],\\\"sizeBytes\\\":448723134}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:09:20.454779 master-0 kubenswrapper[7864]: E0224 02:09:20.454710 7864 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="30.486s" Feb 24 02:09:20.454779 master-0 kubenswrapper[7864]: I0224 02:09:20.454778 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.454871 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerDied","Data":"5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce"} Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.455326 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.455357 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.455377 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.455392 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.455410 7864 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.455425 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.455441 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.455458 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:09:20.455494 master-0 kubenswrapper[7864]: I0224 02:09:20.455463 7864 scope.go:117] "RemoveContainer" containerID="5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce" Feb 24 02:09:20.478676 master-0 kubenswrapper[7864]: I0224 02:09:20.478536 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g862w" podStartSLOduration=254.106977828 podStartE2EDuration="4m35.478514671s" podCreationTimestamp="2026-02-24 02:04:45 +0000 UTC" firstStartedPulling="2026-02-24 02:04:46.798433119 +0000 UTC m=+51.126086751" lastFinishedPulling="2026-02-24 02:05:08.169969932 +0000 UTC m=+72.497623594" observedRunningTime="2026-02-24 02:09:20.467314324 +0000 UTC m=+324.794967976" watchObservedRunningTime="2026-02-24 02:09:20.478514671 +0000 UTC m=+324.806168323" Feb 24 02:09:20.479460 master-0 kubenswrapper[7864]: I0224 02:09:20.479398 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 02:09:20.486524 master-0 kubenswrapper[7864]: I0224 02:09:20.486479 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 
24 02:09:20.486620 master-0 kubenswrapper[7864]: I0224 02:09:20.486538 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:09:20.486620 master-0 kubenswrapper[7864]: I0224 02:09:20.486561 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 24 02:09:20.486620 master-0 kubenswrapper[7864]: I0224 02:09:20.486608 7864 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="86acce7d-3d88-41df-886b-52b692297498" Feb 24 02:09:20.486749 master-0 kubenswrapper[7864]: I0224 02:09:20.486634 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:09:20.486749 master-0 kubenswrapper[7864]: I0224 02:09:20.486658 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 24 02:09:20.486749 master-0 kubenswrapper[7864]: I0224 02:09:20.486673 7864 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="86acce7d-3d88-41df-886b-52b692297498" Feb 24 02:09:20.486749 master-0 kubenswrapper[7864]: I0224 02:09:20.486691 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:09:20.486749 master-0 kubenswrapper[7864]: I0224 02:09:20.486709 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" event={"ID":"9b5620d6-a5fe-45d7-b39e-8bed7f602a17","Type":"ContainerDied","Data":"18e36dffd25cf50db60c55874ae5e83aa35faa4fa1dff2c477ec4899a01aa1f0"} Feb 24 02:09:20.486933 master-0 kubenswrapper[7864]: I0224 02:09:20.486782 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:09:20.486933 master-0 kubenswrapper[7864]: I0224 02:09:20.486840 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:09:20.487016 master-0 kubenswrapper[7864]: I0224 02:09:20.486954 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:09:20.487016 master-0 kubenswrapper[7864]: I0224 02:09:20.486982 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" event={"ID":"3332acec-1553-4594-a903-a322399f6d9d","Type":"ContainerDied","Data":"bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241"} Feb 24 02:09:20.487089 master-0 kubenswrapper[7864]: I0224 02:09:20.487017 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:20.487089 master-0 kubenswrapper[7864]: I0224 02:09:20.487037 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerDied","Data":"681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485"} Feb 24 02:09:20.487174 master-0 kubenswrapper[7864]: I0224 02:09:20.487134 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:09:20.487220 master-0 kubenswrapper[7864]: I0224 02:09:20.487194 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:09:20.487220 master-0 kubenswrapper[7864]: I0224 02:09:20.487214 7864 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:09:20.488644 master-0 kubenswrapper[7864]: I0224 02:09:20.488599 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" event={"ID":"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7","Type":"ContainerDied","Data":"68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049"} Feb 24 02:09:20.488704 master-0 kubenswrapper[7864]: I0224 02:09:20.488658 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerDied","Data":"40959528c0e652134371f4afb20a4ee849f4f1c1c0599ddd64b9076a7771bc13"} Feb 24 02:09:20.488704 master-0 kubenswrapper[7864]: I0224 02:09:20.488685 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" event={"ID":"a02536a3-7d3e-4e74-9625-aefed518ec35","Type":"ContainerDied","Data":"7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84"} Feb 24 02:09:20.488783 master-0 kubenswrapper[7864]: I0224 02:09:20.488708 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" event={"ID":"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2","Type":"ContainerDied","Data":"fcdcbf5a149a0b578453f994965bc5ee1ca152377a3fb51c3ba2512d342fe454"} Feb 24 02:09:20.488783 master-0 kubenswrapper[7864]: I0224 02:09:20.488769 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108"} Feb 24 02:09:20.488863 master-0 kubenswrapper[7864]: I0224 02:09:20.488798 7864 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerStarted","Data":"3b9fb79825f01ed4c00a7769132bc445aa326532ee27382a22aec874d90be7a4"} Feb 24 02:09:20.488903 master-0 kubenswrapper[7864]: I0224 02:09:20.488860 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" event={"ID":"fcbda577-b943-4b5c-b041-948aece8e40f","Type":"ContainerDied","Data":"34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc"} Feb 24 02:09:20.488903 master-0 kubenswrapper[7864]: I0224 02:09:20.488885 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"bd02da41-8a48-4436-ae58-6363e7554898","Type":"ContainerDied","Data":"66e7c8048611a89e5a8013562e4ce272875677cb82b5701298ff4ad3f7e01366"} Feb 24 02:09:20.488979 master-0 kubenswrapper[7864]: I0224 02:09:20.488905 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e7c8048611a89e5a8013562e4ce272875677cb82b5701298ff4ad3f7e01366" Feb 24 02:09:20.488979 master-0 kubenswrapper[7864]: I0224 02:09:20.488922 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" event={"ID":"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c","Type":"ContainerDied","Data":"e9d2b8b0026aada75f8d27003b5a7df3b3f0253b60da8ae01339ca6b74582705"} Feb 24 02:09:20.488979 master-0 kubenswrapper[7864]: I0224 02:09:20.488947 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"28d78d14185433406f5d6be1256f4efc7cd117cd145b616d5b8ccdbc8f03929c"} Feb 24 02:09:20.488979 master-0 kubenswrapper[7864]: I0224 02:09:20.488966 
7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" event={"ID":"f85222bf-f51a-4232-8db1-1e6ee593617b","Type":"ContainerDied","Data":"2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369"} Feb 24 02:09:20.489140 master-0 kubenswrapper[7864]: I0224 02:09:20.488990 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"683deae1-94b1-4c17-a73f-ad628a09134b","Type":"ContainerDied","Data":"94401de1842b75a4dd153e2d7cb3bd01f3f26706beddf59514cdea6c0eb4a139"} Feb 24 02:09:20.489140 master-0 kubenswrapper[7864]: I0224 02:09:20.489010 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerStarted","Data":"9d6fec3fa582bb40f876b3cafc2f570058ac361dce1068e53e87a7e383e88cf2"} Feb 24 02:09:20.489140 master-0 kubenswrapper[7864]: I0224 02:09:20.489030 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" event={"ID":"a02536a3-7d3e-4e74-9625-aefed518ec35","Type":"ContainerStarted","Data":"fe82991d620c953a66413bff375a2b214f4a5b8652aa8341d49741ccabb41961"} Feb 24 02:09:20.489140 master-0 kubenswrapper[7864]: I0224 02:09:20.489048 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" event={"ID":"f85222bf-f51a-4232-8db1-1e6ee593617b","Type":"ContainerStarted","Data":"35a7a7e510655bf960c0ebb22ba6e0c6db1f746f1a9047b38fb6bb5a0b24bc60"} Feb 24 02:09:20.489140 master-0 kubenswrapper[7864]: I0224 02:09:20.489065 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" 
event={"ID":"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7","Type":"ContainerStarted","Data":"5d2191d76e599e2fb849008551eb68ecc941d4be08a06831c47e2e57561783d8"} Feb 24 02:09:20.489140 master-0 kubenswrapper[7864]: I0224 02:09:20.489082 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" event={"ID":"fcbda577-b943-4b5c-b041-948aece8e40f","Type":"ContainerStarted","Data":"b052a287fe47bb0fa4d703a983f03a367ec9eff6f9c816432ac44ee4c3a812f3"} Feb 24 02:09:20.489140 master-0 kubenswrapper[7864]: I0224 02:09:20.489100 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" event={"ID":"3332acec-1553-4594-a903-a322399f6d9d","Type":"ContainerStarted","Data":"ab02a48b627b8d33941cca436e701fe8cca5e7a818343839afa499d5ab3abe6a"} Feb 24 02:09:20.489140 master-0 kubenswrapper[7864]: I0224 02:09:20.489117 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerDied","Data":"f88738c7cb3808e8ebb5ddd209f4e28577d6aec5e69f689e145c36b78a77fe4b"} Feb 24 02:09:20.489140 master-0 kubenswrapper[7864]: I0224 02:09:20.489136 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerDied","Data":"e69376d98cee67244b069177748eb8161f1ffee16e9b9f5abd63b6aff145de6c"} Feb 24 02:09:20.489467 master-0 kubenswrapper[7864]: I0224 02:09:20.489155 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerDied","Data":"2a70331e31f309db225d3996274bc257195cff624763144e3200d4a89257b219"} Feb 24 02:09:20.489467 master-0 
kubenswrapper[7864]: I0224 02:09:20.489175 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerDied","Data":"1dd68f4f64e0c62e01d0497cf59111173fe627d06971140a305a4032c20cc485"} Feb 24 02:09:20.489467 master-0 kubenswrapper[7864]: I0224 02:09:20.489195 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerDied","Data":"bb8e1724e77d6ceb463e444b223fcd8637d9a803be2af1a8dcbebbfedcda21d8"} Feb 24 02:09:20.489467 master-0 kubenswrapper[7864]: I0224 02:09:20.489218 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"28d78d14185433406f5d6be1256f4efc7cd117cd145b616d5b8ccdbc8f03929c"} Feb 24 02:09:20.489467 master-0 kubenswrapper[7864]: I0224 02:09:20.489238 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerDied","Data":"3b9fb79825f01ed4c00a7769132bc445aa326532ee27382a22aec874d90be7a4"} Feb 24 02:09:20.489467 master-0 kubenswrapper[7864]: I0224 02:09:20.489258 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerDied","Data":"1d78e51e0a1da7f353fa2fc0c8e9c9a46d124e7c769ba9917e9138703d244089"} Feb 24 02:09:20.489467 master-0 kubenswrapper[7864]: I0224 02:09:20.489404 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" 
event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerDied","Data":"7144c5e947ad686471e67b52048230854640c3d324dfe4c40330e542a4803eda"} Feb 24 02:09:20.489467 master-0 kubenswrapper[7864]: I0224 02:09:20.489463 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" event={"ID":"bd1a99d5-e213-42b3-9538-44f68d993184","Type":"ContainerDied","Data":"5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489496 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" event={"ID":"3332acec-1553-4594-a903-a322399f6d9d","Type":"ContainerDied","Data":"ab02a48b627b8d33941cca436e701fe8cca5e7a818343839afa499d5ab3abe6a"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489530 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerDied","Data":"9d6fec3fa582bb40f876b3cafc2f570058ac361dce1068e53e87a7e383e88cf2"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489559 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" event={"ID":"a02536a3-7d3e-4e74-9625-aefed518ec35","Type":"ContainerDied","Data":"fe82991d620c953a66413bff375a2b214f4a5b8652aa8341d49741ccabb41961"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489626 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" event={"ID":"f85222bf-f51a-4232-8db1-1e6ee593617b","Type":"ContainerDied","Data":"35a7a7e510655bf960c0ebb22ba6e0c6db1f746f1a9047b38fb6bb5a0b24bc60"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: 
I0224 02:09:20.489656 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" event={"ID":"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7","Type":"ContainerDied","Data":"5d2191d76e599e2fb849008551eb68ecc941d4be08a06831c47e2e57561783d8"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489691 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" event={"ID":"fcbda577-b943-4b5c-b041-948aece8e40f","Type":"ContainerDied","Data":"b052a287fe47bb0fa4d703a983f03a367ec9eff6f9c816432ac44ee4c3a812f3"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489725 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerStarted","Data":"8ea7a40a6269170c26a7054768f3ac9bed9d2f95f70afef7c8db1dfa7590c2a4"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489749 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"7418acef2878f63a41664398dc64c50d69563b99c0e4935df8104aecdaf495b4"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489770 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"0899242d9942257db778aa29a478801ba8d2518e639b0033c7b16a0a42ff10a5"} Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489788 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"e94d6492402ac33f13355d596dfc90617ce3a06153f369c7597c91d9aa0d6092"} 
Feb 24 02:09:20.489804 master-0 kubenswrapper[7864]: I0224 02:09:20.489805 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"d67efce29a4c7dadf9def673a29b605a940ecf24b2d87b4ec084d429002c032e"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489823 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"8e958425dd3f7d3725b8e44d186204361d688e9476e552207418132e8cb6897d"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489841 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerStarted","Data":"b55dffa76d343a2f86cafe3f911f9b262284cdca97531fab28e85dab3a7157d5"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489860 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" event={"ID":"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c","Type":"ContainerStarted","Data":"fd1584ac1124bfeb40135ff2c60362b0107b9088e06ac8a200164cc1456f163b"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489880 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"e132e5edce75784f17a5d6083ace7875ddb1c8bd8a8f79447fddc0374107052b"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489898 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" 
event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerStarted","Data":"8f261203a7d383fb41e07035f6494d12c5ece1ea073bc54bdb848ad1f13db388"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489918 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerStarted","Data":"a75bd245067c2bd96dc6595a801a2c01cb01bd1d3a373e46bfec3a120455dff3"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489939 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" event={"ID":"f85222bf-f51a-4232-8db1-1e6ee593617b","Type":"ContainerStarted","Data":"012d6b821fd3ee17fb9f5bf5451fa90bd22ad830cc6d1b88590aa2b80b979353"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489959 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" event={"ID":"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7","Type":"ContainerStarted","Data":"e94e209be7ac1e9c90f8d05540fb0675399616e7598a1415ddddef916110f47e"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489977 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerStarted","Data":"0a3f515f89392d53020339854399716141f53a518a951acd09091b7c2ee359c5"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.489999 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerStarted","Data":"a6b3894561c0264e4475f846f6d8a4a1bfc54fcd4a779abb17626105631909c8"} Feb 24 02:09:20.490258 master-0 
kubenswrapper[7864]: I0224 02:09:20.490049 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" event={"ID":"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2","Type":"ContainerStarted","Data":"0ebc34731370eef8de7778627bb84636b6e1c8e231035e298fc1c33b8cc5b26c"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490069 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490088 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerStarted","Data":"4302401dd0c403009a73a84bbcac7a1752f583128d1d71005e6d83a2aeed0734"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490107 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerStarted","Data":"6dce9a18bd8067d5b09584fe75915e8862f74f2dfbc221872c96fc50a87d78c5"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490129 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" event={"ID":"bd1a99d5-e213-42b3-9538-44f68d993184","Type":"ContainerStarted","Data":"3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490148 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" 
event={"ID":"3332acec-1553-4594-a903-a322399f6d9d","Type":"ContainerStarted","Data":"557c7ff2e8e380b71d6ccd161c67b68838831e347351c85dd62355bb14a7036c"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490166 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"683deae1-94b1-4c17-a73f-ad628a09134b","Type":"ContainerDied","Data":"63cac87aa9f86fe69782f0e078c00d8a3d420e25f4f78bbbbc9cfebe09080f84"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490188 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63cac87aa9f86fe69782f0e078c00d8a3d420e25f4f78bbbbc9cfebe09080f84" Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490206 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" event={"ID":"fcbda577-b943-4b5c-b041-948aece8e40f","Type":"ContainerStarted","Data":"4d0f5f1383f3fd6438bc21f29c7714007c4b1b3f11506fd58ea51a3c14b41a68"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490224 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerStarted","Data":"2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490245 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" event={"ID":"a02536a3-7d3e-4e74-9625-aefed518ec35","Type":"ContainerStarted","Data":"99a135eec1fc60a023004a63b57bb9c9bebf117dd68bed38de221f8b6714663d"} Feb 24 02:09:20.490258 master-0 kubenswrapper[7864]: I0224 02:09:20.490263 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" event={"ID":"9b5620d6-a5fe-45d7-b39e-8bed7f602a17","Type":"ContainerStarted","Data":"0c62d0c7c4600387db3c442e781dcbd028ad9bd230843d85d89b3999a7a687b8"} Feb 24 02:09:20.496525 master-0 kubenswrapper[7864]: I0224 02:09:20.496443 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:09:20.507170 master-0 kubenswrapper[7864]: I0224 02:09:20.507138 7864 scope.go:117] "RemoveContainer" containerID="bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241" Feb 24 02:09:20.519527 master-0 kubenswrapper[7864]: I0224 02:09:20.519426 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" podStartSLOduration=256.35010211 podStartE2EDuration="4m27.519392393s" podCreationTimestamp="2026-02-24 02:04:53 +0000 UTC" firstStartedPulling="2026-02-24 02:04:54.773363085 +0000 UTC m=+59.101016707" lastFinishedPulling="2026-02-24 02:05:05.942653328 +0000 UTC m=+70.270306990" observedRunningTime="2026-02-24 02:09:20.515716706 +0000 UTC m=+324.843370388" watchObservedRunningTime="2026-02-24 02:09:20.519392393 +0000 UTC m=+324.847046045" Feb 24 02:09:20.546594 master-0 kubenswrapper[7864]: I0224 02:09:20.546473 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dwmm5" podStartSLOduration=255.225291751 podStartE2EDuration="4m38.54643962s" podCreationTimestamp="2026-02-24 02:04:42 +0000 UTC" firstStartedPulling="2026-02-24 02:04:43.723870616 +0000 UTC m=+48.051524238" lastFinishedPulling="2026-02-24 02:05:07.045018475 +0000 UTC m=+71.372672107" observedRunningTime="2026-02-24 02:09:20.541313304 +0000 UTC m=+324.868966956" watchObservedRunningTime="2026-02-24 02:09:20.54643962 +0000 UTC m=+324.874093282" Feb 24 02:09:20.551493 master-0 
kubenswrapper[7864]: I0224 02:09:20.551444 7864 scope.go:117] "RemoveContainer" containerID="681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485" Feb 24 02:09:20.568489 master-0 kubenswrapper[7864]: I0224 02:09:20.568387 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rvp5j" podStartSLOduration=254.190986805 podStartE2EDuration="4m38.56836099s" podCreationTimestamp="2026-02-24 02:04:42 +0000 UTC" firstStartedPulling="2026-02-24 02:04:43.730618641 +0000 UTC m=+48.058272263" lastFinishedPulling="2026-02-24 02:05:08.107992816 +0000 UTC m=+72.435646448" observedRunningTime="2026-02-24 02:09:20.567492047 +0000 UTC m=+324.895145699" watchObservedRunningTime="2026-02-24 02:09:20.56836099 +0000 UTC m=+324.896014642" Feb 24 02:09:20.598041 master-0 kubenswrapper[7864]: I0224 02:09:20.597986 7864 scope.go:117] "RemoveContainer" containerID="68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049" Feb 24 02:09:20.618910 master-0 kubenswrapper[7864]: I0224 02:09:20.618852 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 24 02:09:20.623157 master-0 kubenswrapper[7864]: I0224 02:09:20.623095 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 24 02:09:20.703436 master-0 kubenswrapper[7864]: I0224 02:09:20.703376 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-drrqm_3332acec-1553-4594-a903-a322399f6d9d/network-operator/1.log" Feb 24 02:09:20.705828 master-0 kubenswrapper[7864]: I0224 02:09:20.705780 7864 scope.go:117] "RemoveContainer" containerID="7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84" Feb 24 02:09:20.707716 master-0 kubenswrapper[7864]: I0224 02:09:20.707647 7864 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-sl5hz_b36d8451-0fda-4d9d-a850-d05c8f847016/openshift-apiserver-operator/1.log" Feb 24 02:09:20.713469 master-0 kubenswrapper[7864]: I0224 02:09:20.712461 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/1.log" Feb 24 02:09:20.713469 master-0 kubenswrapper[7864]: I0224 02:09:20.713258 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/0.log" Feb 24 02:09:20.713469 master-0 kubenswrapper[7864]: I0224 02:09:20.713334 7864 generic.go:334] "Generic (PLEG): container finished" podID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" containerID="0a3f515f89392d53020339854399716141f53a518a951acd09091b7c2ee359c5" exitCode=1 Feb 24 02:09:20.713469 master-0 kubenswrapper[7864]: I0224 02:09:20.713441 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerDied","Data":"0a3f515f89392d53020339854399716141f53a518a951acd09091b7c2ee359c5"} Feb 24 02:09:20.714167 master-0 kubenswrapper[7864]: I0224 02:09:20.714068 7864 scope.go:117] "RemoveContainer" containerID="0a3f515f89392d53020339854399716141f53a518a951acd09091b7c2ee359c5" Feb 24 02:09:20.714797 master-0 kubenswrapper[7864]: E0224 02:09:20.714703 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" Feb 24 02:09:20.716296 master-0 kubenswrapper[7864]: I0224 02:09:20.715926 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/1.log" Feb 24 02:09:20.753021 master-0 kubenswrapper[7864]: I0224 02:09:20.752651 7864 scope.go:117] "RemoveContainer" containerID="00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108" Feb 24 02:09:20.786916 master-0 kubenswrapper[7864]: I0224 02:09:20.786864 7864 scope.go:117] "RemoveContainer" containerID="bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d" Feb 24 02:09:20.817939 master-0 kubenswrapper[7864]: I0224 02:09:20.817881 7864 scope.go:117] "RemoveContainer" containerID="34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc" Feb 24 02:09:20.849251 master-0 kubenswrapper[7864]: I0224 02:09:20.849182 7864 scope.go:117] "RemoveContainer" containerID="2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: I0224 02:09:20.884903 7864 scope.go:117] "RemoveContainer" containerID="00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: E0224 02:09:20.885448 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108\": container with ID starting with 00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108 not found: ID does not exist" containerID="00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: I0224 02:09:20.885498 7864 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108"} err="failed to get container status \"00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108\": rpc error: code = NotFound desc = could not find container \"00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108\": container with ID starting with 00825b03893f0ea1af43852d35dd4ae7ab3698ccf0fdf722de5de648ffb1c108 not found: ID does not exist" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: I0224 02:09:20.885532 7864 scope.go:117] "RemoveContainer" containerID="bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: E0224 02:09:20.886106 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d\": container with ID starting with bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d not found: ID does not exist" containerID="bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: I0224 02:09:20.886476 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d"} err="failed to get container status \"bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d\": rpc error: code = NotFound desc = could not find container \"bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d\": container with ID starting with bb19051dc2b31dca07092846d1f69e1993bc40ba384cd5a5b58d6d990afdcb5d not found: ID does not exist" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: I0224 02:09:20.886508 7864 scope.go:117] "RemoveContainer" containerID="5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: E0224 
02:09:20.886887 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce\": container with ID starting with 5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce not found: ID does not exist" containerID="5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: I0224 02:09:20.886922 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce"} err="failed to get container status \"5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce\": rpc error: code = NotFound desc = could not find container \"5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce\": container with ID starting with 5667f053bf6054763921df7da9ea78c08243cf4ba68ece9b8b2d0028467d46ce not found: ID does not exist" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: I0224 02:09:20.886948 7864 scope.go:117] "RemoveContainer" containerID="bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: E0224 02:09:20.887358 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241\": container with ID starting with bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241 not found: ID does not exist" containerID="bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: I0224 02:09:20.887413 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241"} err="failed to get container status 
\"bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241\": rpc error: code = NotFound desc = could not find container \"bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241\": container with ID starting with bd04ca4878f34a8e0c0c455c1d43cdf6ed71c1c4d7bdddea11524004be4de241 not found: ID does not exist" Feb 24 02:09:20.887805 master-0 kubenswrapper[7864]: I0224 02:09:20.887445 7864 scope.go:117] "RemoveContainer" containerID="681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485" Feb 24 02:09:20.888626 master-0 kubenswrapper[7864]: E0224 02:09:20.887934 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485\": container with ID starting with 681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485 not found: ID does not exist" containerID="681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485" Feb 24 02:09:20.888626 master-0 kubenswrapper[7864]: I0224 02:09:20.887971 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485"} err="failed to get container status \"681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485\": rpc error: code = NotFound desc = could not find container \"681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485\": container with ID starting with 681796145fac101487f620de46e9725e65a37fa800e21480a31dfc70bdc40485 not found: ID does not exist" Feb 24 02:09:20.888626 master-0 kubenswrapper[7864]: I0224 02:09:20.887999 7864 scope.go:117] "RemoveContainer" containerID="7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84" Feb 24 02:09:20.888626 master-0 kubenswrapper[7864]: E0224 02:09:20.888401 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84\": container with ID starting with 7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84 not found: ID does not exist" containerID="7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84" Feb 24 02:09:20.888626 master-0 kubenswrapper[7864]: I0224 02:09:20.888437 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84"} err="failed to get container status \"7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84\": rpc error: code = NotFound desc = could not find container \"7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84\": container with ID starting with 7511febd7f596c7a27d0ee29de7073df96d352c94420e8b91ab376f0f8ffbe84 not found: ID does not exist" Feb 24 02:09:20.888626 master-0 kubenswrapper[7864]: I0224 02:09:20.888463 7864 scope.go:117] "RemoveContainer" containerID="2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369" Feb 24 02:09:20.888991 master-0 kubenswrapper[7864]: E0224 02:09:20.888888 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369\": container with ID starting with 2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369 not found: ID does not exist" containerID="2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369" Feb 24 02:09:20.888991 master-0 kubenswrapper[7864]: I0224 02:09:20.888925 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369"} err="failed to get container status \"2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369\": rpc error: code = NotFound desc = could not find container 
\"2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369\": container with ID starting with 2434dea5e725cbfa7b0fd89dd34a4a36539029bcba3b05703f8e046b7372d369 not found: ID does not exist" Feb 24 02:09:20.888991 master-0 kubenswrapper[7864]: I0224 02:09:20.888982 7864 scope.go:117] "RemoveContainer" containerID="68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049" Feb 24 02:09:20.889464 master-0 kubenswrapper[7864]: E0224 02:09:20.889384 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049\": container with ID starting with 68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049 not found: ID does not exist" containerID="68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049" Feb 24 02:09:20.889464 master-0 kubenswrapper[7864]: I0224 02:09:20.889426 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049"} err="failed to get container status \"68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049\": rpc error: code = NotFound desc = could not find container \"68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049\": container with ID starting with 68ada84fc11ef9772ab5e035538455741610ae1dd423fbf673e921c253973049 not found: ID does not exist" Feb 24 02:09:20.889464 master-0 kubenswrapper[7864]: I0224 02:09:20.889451 7864 scope.go:117] "RemoveContainer" containerID="34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc" Feb 24 02:09:20.889966 master-0 kubenswrapper[7864]: E0224 02:09:20.889914 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc\": container with ID starting with 
34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc not found: ID does not exist" containerID="34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc" Feb 24 02:09:20.890045 master-0 kubenswrapper[7864]: I0224 02:09:20.889956 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc"} err="failed to get container status \"34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc\": rpc error: code = NotFound desc = could not find container \"34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc\": container with ID starting with 34c020b5f77acec103ffa53ff06b99869f8141239314c8cafabb68cd7a9b73bc not found: ID does not exist" Feb 24 02:09:20.890045 master-0 kubenswrapper[7864]: I0224 02:09:20.889982 7864 scope.go:117] "RemoveContainer" containerID="1dd68f4f64e0c62e01d0497cf59111173fe627d06971140a305a4032c20cc485" Feb 24 02:09:21.123318 master-0 kubenswrapper[7864]: I0224 02:09:21.123098 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hrmdr" podStartSLOduration=256.866886447 podStartE2EDuration="4m38.123070882s" podCreationTimestamp="2026-02-24 02:04:43 +0000 UTC" firstStartedPulling="2026-02-24 02:04:45.789570147 +0000 UTC m=+50.117223799" lastFinishedPulling="2026-02-24 02:05:07.045754602 +0000 UTC m=+71.373408234" observedRunningTime="2026-02-24 02:09:21.119168839 +0000 UTC m=+325.446822501" watchObservedRunningTime="2026-02-24 02:09:21.123070882 +0000 UTC m=+325.450724534" Feb 24 02:09:21.726020 master-0 kubenswrapper[7864]: I0224 02:09:21.725948 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-8tttg_f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/kube-scheduler-operator-container/1.log" Feb 24 02:09:21.728643 master-0 kubenswrapper[7864]: I0224 02:09:21.728564 7864 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/1.log" Feb 24 02:09:21.729782 master-0 kubenswrapper[7864]: I0224 02:09:21.729698 7864 scope.go:117] "RemoveContainer" containerID="0a3f515f89392d53020339854399716141f53a518a951acd09091b7c2ee359c5" Feb 24 02:09:21.730227 master-0 kubenswrapper[7864]: E0224 02:09:21.730139 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" Feb 24 02:09:21.732046 master-0 kubenswrapper[7864]: I0224 02:09:21.731944 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-xdws2_fcbda577-b943-4b5c-b041-948aece8e40f/kube-storage-version-migrator-operator/1.log" Feb 24 02:09:21.739753 master-0 kubenswrapper[7864]: I0224 02:09:21.739697 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-tl97n_a02536a3-7d3e-4e74-9625-aefed518ec35/kube-controller-manager-operator/1.log" Feb 24 02:09:21.742915 master-0 kubenswrapper[7864]: I0224 02:09:21.742858 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-2492q_f85222bf-f51a-4232-8db1-1e6ee593617b/kube-apiserver-operator/1.log" Feb 24 02:09:21.900611 master-0 kubenswrapper[7864]: I0224 02:09:21.900498 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="598f691a-1472-4198-bcd5-6956217d30f9" path="/var/lib/kubelet/pods/598f691a-1472-4198-bcd5-6956217d30f9/volumes" Feb 24 02:09:22.084467 master-0 kubenswrapper[7864]: I0224 02:09:22.084294 7864 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-46vmq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Feb 24 02:09:22.084467 master-0 kubenswrapper[7864]: I0224 02:09:22.084381 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" podUID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Feb 24 02:09:22.994983 master-0 kubenswrapper[7864]: I0224 02:09:22.994889 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 24 02:09:26.407828 master-0 kubenswrapper[7864]: E0224 02:09:26.407630 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Feb 24 02:09:27.217008 master-0 kubenswrapper[7864]: I0224 02:09:27.216882 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:27.995410 master-0 kubenswrapper[7864]: I0224 02:09:27.995287 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 24 02:09:28.031620 master-0 kubenswrapper[7864]: I0224 02:09:28.031497 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-etcd/etcd-master-0" Feb 24 02:09:28.622562 master-0 kubenswrapper[7864]: E0224 02:09:28.622438 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:09:29.804221 master-0 kubenswrapper[7864]: I0224 02:09:29.804095 7864 generic.go:334] "Generic (PLEG): container finished" podID="303d5058-84df-40d1-a941-896b093ae470" containerID="af016520fee3befe328cfcacf1e61661d632e7bab3a83e84890042b8512881e1" exitCode=0 Feb 24 02:09:29.805002 master-0 kubenswrapper[7864]: I0224 02:09:29.804205 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerDied","Data":"af016520fee3befe328cfcacf1e61661d632e7bab3a83e84890042b8512881e1"} Feb 24 02:09:29.805002 master-0 kubenswrapper[7864]: I0224 02:09:29.804314 7864 scope.go:117] "RemoveContainer" containerID="8930b4416260ff6550582e0ac717e48f996f0ee753ab29009f1d6eec95a046f6" Feb 24 02:09:29.805136 master-0 kubenswrapper[7864]: I0224 02:09:29.805010 7864 scope.go:117] "RemoveContainer" containerID="af016520fee3befe328cfcacf1e61661d632e7bab3a83e84890042b8512881e1" Feb 24 02:09:29.805459 master-0 kubenswrapper[7864]: E0224 02:09:29.805394 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-5bd7768f54-7wc6k_openshift-cluster-olm-operator(303d5058-84df-40d1-a941-896b093ae470)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" podUID="303d5058-84df-40d1-a941-896b093ae470" Feb 24 02:09:29.807916 master-0 kubenswrapper[7864]: I0224 02:09:29.807858 7864 generic.go:334] 
"Generic (PLEG): container finished" podID="c92835f0-7f32-4584-8304-843d7979392a" containerID="31574f7edd04f096f094395019ad492b65bb9b76a514603c20ad3eb658100f5f" exitCode=0 Feb 24 02:09:29.808026 master-0 kubenswrapper[7864]: I0224 02:09:29.807939 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerDied","Data":"31574f7edd04f096f094395019ad492b65bb9b76a514603c20ad3eb658100f5f"} Feb 24 02:09:29.808687 master-0 kubenswrapper[7864]: I0224 02:09:29.808646 7864 scope.go:117] "RemoveContainer" containerID="31574f7edd04f096f094395019ad492b65bb9b76a514603c20ad3eb658100f5f" Feb 24 02:09:29.809015 master-0 kubenswrapper[7864]: E0224 02:09:29.808969 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-ccrxg_openshift-config-operator(c92835f0-7f32-4584-8304-843d7979392a)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" podUID="c92835f0-7f32-4584-8304-843d7979392a" Feb 24 02:09:29.810334 master-0 kubenswrapper[7864]: I0224 02:09:29.810274 7864 generic.go:334] "Generic (PLEG): container finished" podID="c84dc269-43ae-4083-9998-a0b3c90bb681" containerID="d334785d9219b7d8b6844162f83a93171c2b15443bfd41ab182a8af1bf2c16e9" exitCode=0 Feb 24 02:09:29.810471 master-0 kubenswrapper[7864]: I0224 02:09:29.810374 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" event={"ID":"c84dc269-43ae-4083-9998-a0b3c90bb681","Type":"ContainerDied","Data":"d334785d9219b7d8b6844162f83a93171c2b15443bfd41ab182a8af1bf2c16e9"} Feb 24 02:09:29.810837 master-0 kubenswrapper[7864]: I0224 02:09:29.810809 7864 scope.go:117] "RemoveContainer" 
containerID="d334785d9219b7d8b6844162f83a93171c2b15443bfd41ab182a8af1bf2c16e9" Feb 24 02:09:29.814757 master-0 kubenswrapper[7864]: I0224 02:09:29.813936 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-8x6sd_6a9ccd8e-d964-4c03-8ffc-51b464030c25/cluster-node-tuning-operator/0.log" Feb 24 02:09:29.814757 master-0 kubenswrapper[7864]: I0224 02:09:29.814014 7864 generic.go:334] "Generic (PLEG): container finished" podID="6a9ccd8e-d964-4c03-8ffc-51b464030c25" containerID="3b252dadf775881151b12a03ae689a3184f813df8c5304f84973c53cfd29954e" exitCode=1 Feb 24 02:09:29.814757 master-0 kubenswrapper[7864]: I0224 02:09:29.814109 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" event={"ID":"6a9ccd8e-d964-4c03-8ffc-51b464030c25","Type":"ContainerDied","Data":"3b252dadf775881151b12a03ae689a3184f813df8c5304f84973c53cfd29954e"} Feb 24 02:09:29.814757 master-0 kubenswrapper[7864]: I0224 02:09:29.814659 7864 scope.go:117] "RemoveContainer" containerID="3b252dadf775881151b12a03ae689a3184f813df8c5304f84973c53cfd29954e" Feb 24 02:09:29.816872 master-0 kubenswrapper[7864]: I0224 02:09:29.816510 7864 generic.go:334] "Generic (PLEG): container finished" podID="7b098bd4-5751-4b01-8409-0688fd29233e" containerID="cd4d613347fcadf7d18c80092bf87aa7750bed8421d9d1f3c7ea9c90740b5523" exitCode=0 Feb 24 02:09:29.816872 master-0 kubenswrapper[7864]: I0224 02:09:29.816600 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" event={"ID":"7b098bd4-5751-4b01-8409-0688fd29233e","Type":"ContainerDied","Data":"cd4d613347fcadf7d18c80092bf87aa7750bed8421d9d1f3c7ea9c90740b5523"} Feb 24 02:09:29.817086 master-0 kubenswrapper[7864]: I0224 02:09:29.817044 7864 scope.go:117] "RemoveContainer" 
containerID="cd4d613347fcadf7d18c80092bf87aa7750bed8421d9d1f3c7ea9c90740b5523" Feb 24 02:09:29.821291 master-0 kubenswrapper[7864]: I0224 02:09:29.821247 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765" exitCode=0 Feb 24 02:09:29.821414 master-0 kubenswrapper[7864]: I0224 02:09:29.821313 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765"} Feb 24 02:09:29.821819 master-0 kubenswrapper[7864]: I0224 02:09:29.821788 7864 scope.go:117] "RemoveContainer" containerID="1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765" Feb 24 02:09:29.824632 master-0 kubenswrapper[7864]: I0224 02:09:29.823971 7864 generic.go:334] "Generic (PLEG): container finished" podID="c6153510-452b-4726-8b63-8cc894daa168" containerID="cc397ba4850c1884796fd99f77165bb7223cb379b9ddec9b0da649d7c4abf92f" exitCode=0 Feb 24 02:09:29.824632 master-0 kubenswrapper[7864]: I0224 02:09:29.824012 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" event={"ID":"c6153510-452b-4726-8b63-8cc894daa168","Type":"ContainerDied","Data":"cc397ba4850c1884796fd99f77165bb7223cb379b9ddec9b0da649d7c4abf92f"} Feb 24 02:09:29.824632 master-0 kubenswrapper[7864]: I0224 02:09:29.824388 7864 scope.go:117] "RemoveContainer" containerID="cc397ba4850c1884796fd99f77165bb7223cb379b9ddec9b0da649d7c4abf92f" Feb 24 02:09:29.867062 master-0 kubenswrapper[7864]: I0224 02:09:29.866830 7864 scope.go:117] "RemoveContainer" containerID="07b9433470f2cae90108f994623de6a108abe146e8addc319cfc6c6ef422b361" Feb 24 02:09:30.213068 master-0 kubenswrapper[7864]: I0224 02:09:30.213007 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:30.232169 master-0 kubenswrapper[7864]: I0224 02:09:30.224614 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded" Feb 24 02:09:30.366908 master-0 kubenswrapper[7864]: I0224 02:09:30.366716 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:09:30.836189 master-0 kubenswrapper[7864]: I0224 02:09:30.836104 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" event={"ID":"c6153510-452b-4726-8b63-8cc894daa168","Type":"ContainerStarted","Data":"744949b5aa1c1c17d025fe44f4d0b2efdedae3bce2dd2885b36b07a915024ace"} Feb 24 02:09:30.844800 master-0 kubenswrapper[7864]: I0224 02:09:30.844689 7864 scope.go:117] "RemoveContainer" containerID="31574f7edd04f096f094395019ad492b65bb9b76a514603c20ad3eb658100f5f" Feb 24 02:09:30.845115 master-0 kubenswrapper[7864]: E0224 02:09:30.845054 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-ccrxg_openshift-config-operator(c92835f0-7f32-4584-8304-843d7979392a)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" podUID="c92835f0-7f32-4584-8304-843d7979392a" Feb 24 02:09:30.846714 master-0 kubenswrapper[7864]: I0224 02:09:30.846642 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" 
event={"ID":"c84dc269-43ae-4083-9998-a0b3c90bb681","Type":"ContainerStarted","Data":"f2168e99f1f05c5d55e4f3c5a9f0f43a42237ceed5d8da4d7ab8c9252dfaf352"} Feb 24 02:09:30.850273 master-0 kubenswrapper[7864]: I0224 02:09:30.850216 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-8x6sd_6a9ccd8e-d964-4c03-8ffc-51b464030c25/cluster-node-tuning-operator/0.log" Feb 24 02:09:30.850394 master-0 kubenswrapper[7864]: I0224 02:09:30.850359 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" event={"ID":"6a9ccd8e-d964-4c03-8ffc-51b464030c25","Type":"ContainerStarted","Data":"5a7ad2a781522a5adfd41c8ba931c6dfc84f053a55b73cc07dafd244e7f970cc"} Feb 24 02:09:30.853484 master-0 kubenswrapper[7864]: I0224 02:09:30.853414 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" event={"ID":"7b098bd4-5751-4b01-8409-0688fd29233e","Type":"ContainerStarted","Data":"f292b51fcd18744289c5ad0eb2ad98ecedf1200c5c17b00777cb5c2c1e7e3e7d"} Feb 24 02:09:30.858116 master-0 kubenswrapper[7864]: I0224 02:09:30.858052 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f"} Feb 24 02:09:31.184782 master-0 kubenswrapper[7864]: I0224 02:09:31.184483 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:31.738885 master-0 kubenswrapper[7864]: I0224 02:09:31.738792 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:09:31.866303 master-0 
kubenswrapper[7864]: I0224 02:09:31.866209 7864 scope.go:117] "RemoveContainer" containerID="31574f7edd04f096f094395019ad492b65bb9b76a514603c20ad3eb658100f5f" Feb 24 02:09:31.867295 master-0 kubenswrapper[7864]: E0224 02:09:31.866536 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-ccrxg_openshift-config-operator(c92835f0-7f32-4584-8304-843d7979392a)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" podUID="c92835f0-7f32-4584-8304-843d7979392a" Feb 24 02:09:32.081428 master-0 kubenswrapper[7864]: I0224 02:09:32.081224 7864 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-46vmq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Feb 24 02:09:32.081428 master-0 kubenswrapper[7864]: I0224 02:09:32.081311 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" podUID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Feb 24 02:09:32.503178 master-0 kubenswrapper[7864]: I0224 02:09:32.503041 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:33.016412 master-0 kubenswrapper[7864]: I0224 02:09:33.016355 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 24 02:09:33.497967 master-0 kubenswrapper[7864]: E0224 02:09:33.497902 7864 kubelet.go:1929] "Failed 
creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 24 02:09:35.503740 master-0 kubenswrapper[7864]: I0224 02:09:35.503619 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 02:09:35.878892 master-0 kubenswrapper[7864]: I0224 02:09:35.878666 7864 scope.go:117] "RemoveContainer" containerID="0a3f515f89392d53020339854399716141f53a518a951acd09091b7c2ee359c5" Feb 24 02:09:36.918777 master-0 kubenswrapper[7864]: I0224 02:09:36.918632 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/1.log" Feb 24 02:09:36.918777 master-0 kubenswrapper[7864]: I0224 02:09:36.918788 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerStarted","Data":"4291e0907ef19a32f51af939efa4c04ce2db9b3453891c7e12142da00a40e520"} Feb 24 02:09:38.687564 master-0 kubenswrapper[7864]: I0224 02:09:38.687479 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:38.694892 master-0 kubenswrapper[7864]: I0224 02:09:38.694838 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:40.874698 master-0 kubenswrapper[7864]: I0224 02:09:40.874559 7864 scope.go:117] "RemoveContainer" 
containerID="af016520fee3befe328cfcacf1e61661d632e7bab3a83e84890042b8512881e1" Feb 24 02:09:41.956992 master-0 kubenswrapper[7864]: I0224 02:09:41.956910 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerStarted","Data":"5ade2e4f1fdc37ba74fb08a73d1b48600d369e60d30927c9f48ef0e5d4fba55a"} Feb 24 02:09:42.081940 master-0 kubenswrapper[7864]: I0224 02:09:42.081843 7864 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-46vmq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Feb 24 02:09:42.082137 master-0 kubenswrapper[7864]: I0224 02:09:42.081944 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" podUID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Feb 24 02:09:42.082137 master-0 kubenswrapper[7864]: I0224 02:09:42.082017 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:09:42.082889 master-0 kubenswrapper[7864]: I0224 02:09:42.082823 7864 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"a6b3894561c0264e4475f846f6d8a4a1bfc54fcd4a779abb17626105631909c8"} pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 24 02:09:42.083011 master-0 kubenswrapper[7864]: 
I0224 02:09:42.082900 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" podUID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerName="authentication-operator" containerID="cri-o://a6b3894561c0264e4475f846f6d8a4a1bfc54fcd4a779abb17626105631909c8" gracePeriod=30 Feb 24 02:09:42.511873 master-0 kubenswrapper[7864]: I0224 02:09:42.511795 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:42.520213 master-0 kubenswrapper[7864]: I0224 02:09:42.520150 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:09:42.967347 master-0 kubenswrapper[7864]: I0224 02:09:42.967285 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/2.log" Feb 24 02:09:42.968244 master-0 kubenswrapper[7864]: I0224 02:09:42.968035 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/1.log" Feb 24 02:09:42.968244 master-0 kubenswrapper[7864]: I0224 02:09:42.968099 7864 generic.go:334] "Generic (PLEG): container finished" podID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerID="a6b3894561c0264e4475f846f6d8a4a1bfc54fcd4a779abb17626105631909c8" exitCode=255 Feb 24 02:09:42.968244 master-0 kubenswrapper[7864]: I0224 02:09:42.968177 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerDied","Data":"a6b3894561c0264e4475f846f6d8a4a1bfc54fcd4a779abb17626105631909c8"} Feb 24 
02:09:42.968454 master-0 kubenswrapper[7864]: I0224 02:09:42.968297 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerStarted","Data":"07dffa654def49a750cd8c0b2fc4b62a229828d8d6aee70a6f54eea4290dad7f"} Feb 24 02:09:42.968454 master-0 kubenswrapper[7864]: I0224 02:09:42.968351 7864 scope.go:117] "RemoveContainer" containerID="3b9fb79825f01ed4c00a7769132bc445aa326532ee27382a22aec874d90be7a4" Feb 24 02:09:43.875343 master-0 kubenswrapper[7864]: I0224 02:09:43.875257 7864 scope.go:117] "RemoveContainer" containerID="31574f7edd04f096f094395019ad492b65bb9b76a514603c20ad3eb658100f5f" Feb 24 02:09:43.979418 master-0 kubenswrapper[7864]: I0224 02:09:43.979356 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/2.log" Feb 24 02:09:44.992165 master-0 kubenswrapper[7864]: I0224 02:09:44.992082 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerStarted","Data":"e6ffae40e1742eb9b97a832429ff51e680fe1fd780f353261d9c3fa4f6b341dc"} Feb 24 02:09:44.993019 master-0 kubenswrapper[7864]: I0224 02:09:44.992644 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:09:46.891731 master-0 kubenswrapper[7864]: E0224 02:09:46.891635 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 24 02:09:48.369109 master-0 kubenswrapper[7864]: I0224 02:09:48.369004 7864 patch_prober.go:28] interesting 
pod/openshift-config-operator-6f47d587d6-ccrxg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" start-of-body= Feb 24 02:09:48.369109 master-0 kubenswrapper[7864]: I0224 02:09:48.369097 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" podUID="c92835f0-7f32-4584-8304-843d7979392a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" Feb 24 02:09:49.739610 master-0 kubenswrapper[7864]: I0224 02:09:49.739486 7864 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-ccrxg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" start-of-body= Feb 24 02:09:49.740663 master-0 kubenswrapper[7864]: I0224 02:09:49.739635 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" podUID="c92835f0-7f32-4584-8304-843d7979392a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.23:8443/healthz\": dial tcp 10.128.0.23:8443: connect: connection refused" Feb 24 02:09:52.744875 master-0 kubenswrapper[7864]: I0224 02:09:52.744750 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.281963 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"] Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: E0224 
02:10:04.282338 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd02da41-8a48-4436-ae58-6363e7554898" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.282365 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd02da41-8a48-4436-ae58-6363e7554898" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: E0224 02:10:04.282390 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683deae1-94b1-4c17-a73f-ad628a09134b" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.282404 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="683deae1-94b1-4c17-a73f-ad628a09134b" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: E0224 02:10:04.282430 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b7ea36-8849-4955-80b5-c7e7c12fcc29" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.282441 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b7ea36-8849-4955-80b5-c7e7c12fcc29" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: E0224 02:10:04.282457 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598f691a-1472-4198-bcd5-6956217d30f9" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.282469 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="598f691a-1472-4198-bcd5-6956217d30f9" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.282675 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd02da41-8a48-4436-ae58-6363e7554898" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.282692 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b7ea36-8849-4955-80b5-c7e7c12fcc29" containerName="installer" Feb 
24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.282705 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="683deae1-94b1-4c17-a73f-ad628a09134b" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.282731 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="598f691a-1472-4198-bcd5-6956217d30f9" containerName="installer" Feb 24 02:10:04.283603 master-0 kubenswrapper[7864]: I0224 02:10:04.283542 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.287737 master-0 kubenswrapper[7864]: I0224 02:10:04.286887 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 24 02:10:04.287737 master-0 kubenswrapper[7864]: I0224 02:10:04.286980 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 24 02:10:04.287737 master-0 kubenswrapper[7864]: I0224 02:10:04.287363 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 24 02:10:04.287737 master-0 kubenswrapper[7864]: I0224 02:10:04.287467 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 24 02:10:04.309212 master-0 kubenswrapper[7864]: I0224 02:10:04.308767 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"] Feb 24 02:10:04.361254 master-0 kubenswrapper[7864]: I0224 02:10:04.360123 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.361254 master-0 kubenswrapper[7864]: I0224 02:10:04.360217 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmhx\" (UniqueName: \"kubernetes.io/projected/74a7801b-b7a4-4292-91b3-6285c239aeb7-kube-api-access-pdmhx\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.361254 master-0 kubenswrapper[7864]: I0224 02:10:04.360252 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74a7801b-b7a4-4292-91b3-6285c239aeb7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.461765 master-0 kubenswrapper[7864]: I0224 02:10:04.461656 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmhx\" (UniqueName: \"kubernetes.io/projected/74a7801b-b7a4-4292-91b3-6285c239aeb7-kube-api-access-pdmhx\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.461765 master-0 kubenswrapper[7864]: I0224 02:10:04.461750 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74a7801b-b7a4-4292-91b3-6285c239aeb7-cco-trusted-ca\") pod 
\"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.462235 master-0 kubenswrapper[7864]: I0224 02:10:04.462166 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.462453 master-0 kubenswrapper[7864]: E0224 02:10:04.462390 7864 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 24 02:10:04.462563 master-0 kubenswrapper[7864]: E0224 02:10:04.462537 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert podName:74a7801b-b7a4-4292-91b3-6285c239aeb7 nodeName:}" failed. No retries permitted until 2026-02-24 02:10:04.962503202 +0000 UTC m=+369.290156864 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-fcr59" (UID: "74a7801b-b7a4-4292-91b3-6285c239aeb7") : secret "cloud-credential-operator-serving-cert" not found Feb 24 02:10:04.463038 master-0 kubenswrapper[7864]: I0224 02:10:04.462988 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74a7801b-b7a4-4292-91b3-6285c239aeb7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.490890 master-0 kubenswrapper[7864]: I0224 02:10:04.490814 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmhx\" (UniqueName: \"kubernetes.io/projected/74a7801b-b7a4-4292-91b3-6285c239aeb7-kube-api-access-pdmhx\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.968961 master-0 kubenswrapper[7864]: I0224 02:10:04.968831 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:04.969257 master-0 kubenswrapper[7864]: E0224 02:10:04.969093 7864 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" 
not found Feb 24 02:10:04.969257 master-0 kubenswrapper[7864]: E0224 02:10:04.969225 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert podName:74a7801b-b7a4-4292-91b3-6285c239aeb7 nodeName:}" failed. No retries permitted until 2026-02-24 02:10:05.969194374 +0000 UTC m=+370.296848026 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-fcr59" (UID: "74a7801b-b7a4-4292-91b3-6285c239aeb7") : secret "cloud-credential-operator-serving-cert" not found Feb 24 02:10:05.980344 master-0 kubenswrapper[7864]: I0224 02:10:05.980258 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:10:05.981238 master-0 kubenswrapper[7864]: E0224 02:10:05.980483 7864 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Feb 24 02:10:05.981238 master-0 kubenswrapper[7864]: E0224 02:10:05.980619 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert podName:74a7801b-b7a4-4292-91b3-6285c239aeb7 nodeName:}" failed. No retries permitted until 2026-02-24 02:10:07.980555326 +0000 UTC m=+372.308208978 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-fcr59" (UID: "74a7801b-b7a4-4292-91b3-6285c239aeb7") : secret "cloud-credential-operator-serving-cert" not found
Feb 24 02:10:06.159307 master-0 kubenswrapper[7864]: I0224 02:10:06.159241 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/2.log"
Feb 24 02:10:06.160167 master-0 kubenswrapper[7864]: I0224 02:10:06.160112 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/1.log"
Feb 24 02:10:06.160273 master-0 kubenswrapper[7864]: I0224 02:10:06.160196 7864 generic.go:334] "Generic (PLEG): container finished" podID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" containerID="4291e0907ef19a32f51af939efa4c04ce2db9b3453891c7e12142da00a40e520" exitCode=1
Feb 24 02:10:06.160273 master-0 kubenswrapper[7864]: I0224 02:10:06.160255 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerDied","Data":"4291e0907ef19a32f51af939efa4c04ce2db9b3453891c7e12142da00a40e520"}
Feb 24 02:10:06.160409 master-0 kubenswrapper[7864]: I0224 02:10:06.160314 7864 scope.go:117] "RemoveContainer" containerID="0a3f515f89392d53020339854399716141f53a518a951acd09091b7c2ee359c5"
Feb 24 02:10:06.161286 master-0 kubenswrapper[7864]: I0224 02:10:06.161222 7864 scope.go:117] "RemoveContainer" containerID="4291e0907ef19a32f51af939efa4c04ce2db9b3453891c7e12142da00a40e520"
Feb 24 02:10:06.161752 master-0 kubenswrapper[7864]: E0224 02:10:06.161686 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5"
Feb 24 02:10:07.170032 master-0 kubenswrapper[7864]: I0224 02:10:07.169942 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/2.log"
Feb 24 02:10:08.010143 master-0 kubenswrapper[7864]: I0224 02:10:08.010088 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"
Feb 24 02:10:08.015447 master-0 kubenswrapper[7864]: I0224 02:10:08.015388 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"
Feb 24 02:10:08.233778 master-0 kubenswrapper[7864]: I0224 02:10:08.233665 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"
Feb 24 02:10:08.758635 master-0 kubenswrapper[7864]: I0224 02:10:08.758542 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"]
Feb 24 02:10:08.765987 master-0 kubenswrapper[7864]: W0224 02:10:08.765916 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a7801b_b7a4_4292_91b3_6285c239aeb7.slice/crio-89afab5096911f8752c791dc8598fa3869a80370cd36dec7298c9b9d91c19d81 WatchSource:0}: Error finding container 89afab5096911f8752c791dc8598fa3869a80370cd36dec7298c9b9d91c19d81: Status 404 returned error can't find the container with id 89afab5096911f8752c791dc8598fa3869a80370cd36dec7298c9b9d91c19d81
Feb 24 02:10:08.951843 master-0 kubenswrapper[7864]: I0224 02:10:08.951366 7864 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 02:10:09.186845 master-0 kubenswrapper[7864]: I0224 02:10:09.186694 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" event={"ID":"74a7801b-b7a4-4292-91b3-6285c239aeb7","Type":"ContainerStarted","Data":"613df6266ab2db0a40595ffadff232bad8adba1e1c946c35a0e200ec0ca8ec5a"}
Feb 24 02:10:09.186845 master-0 kubenswrapper[7864]: I0224 02:10:09.186771 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" event={"ID":"74a7801b-b7a4-4292-91b3-6285c239aeb7","Type":"ContainerStarted","Data":"89afab5096911f8752c791dc8598fa3869a80370cd36dec7298c9b9d91c19d81"}
Feb 24 02:10:09.828163 master-0 kubenswrapper[7864]: I0224 02:10:09.828092 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g862w"]
Feb 24 02:10:09.829961 master-0 kubenswrapper[7864]: I0224 02:10:09.829876 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g862w" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerName="registry-server" containerID="cri-o://2532aa7e9c30e14420bff6125a2af5695149f5e5139bfadc9ef461f7fd4adc89" gracePeriod=2
Feb 24 02:10:09.849353 master-0 kubenswrapper[7864]: I0224 02:10:09.849294 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrmdr"]
Feb 24 02:10:09.849709 master-0 kubenswrapper[7864]: I0224 02:10:09.849650 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hrmdr" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerName="registry-server" containerID="cri-o://322843bc1875642e381de77fa85af19267060376aa5ec0dd58286deaec2ce5ae" gracePeriod=2
Feb 24 02:10:09.870028 master-0 kubenswrapper[7864]: I0224 02:10:09.868273 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4znnj"]
Feb 24 02:10:09.877285 master-0 kubenswrapper[7864]: I0224 02:10:09.874779 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:09.884049 master-0 kubenswrapper[7864]: I0224 02:10:09.881636 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-pfvnv"
Feb 24 02:10:09.901727 master-0 kubenswrapper[7864]: I0224 02:10:09.901663 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qqt7p"]
Feb 24 02:10:09.902936 master-0 kubenswrapper[7864]: I0224 02:10:09.902904 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4znnj"]
Feb 24 02:10:09.902936 master-0 kubenswrapper[7864]: I0224 02:10:09.902932 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqt7p"]
Feb 24 02:10:09.903061 master-0 kubenswrapper[7864]: I0224 02:10:09.903005 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:09.904626 master-0 kubenswrapper[7864]: I0224 02:10:09.904492 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7phpl"
Feb 24 02:10:09.934801 master-0 kubenswrapper[7864]: I0224 02:10:09.934758 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-utilities\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:09.934930 master-0 kubenswrapper[7864]: I0224 02:10:09.934806 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q86gx\" (UniqueName: \"kubernetes.io/projected/b085f760-0e24-41a8-af09-538396aad935-kube-api-access-q86gx\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:09.934930 master-0 kubenswrapper[7864]: I0224 02:10:09.934866 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-catalog-content\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:09.935024 master-0 kubenswrapper[7864]: I0224 02:10:09.935015 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-catalog-content\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:09.935068 master-0 kubenswrapper[7864]: I0224 02:10:09.935044 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg7sb\" (UniqueName: \"kubernetes.io/projected/e56a17d6-d740-4349-833e-b5279f7db2d4-kube-api-access-gg7sb\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:09.935352 master-0 kubenswrapper[7864]: I0224 02:10:09.935300 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-utilities\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:10.036300 master-0 kubenswrapper[7864]: I0224 02:10:10.036235 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-utilities\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:10.036300 master-0 kubenswrapper[7864]: I0224 02:10:10.036297 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q86gx\" (UniqueName: \"kubernetes.io/projected/b085f760-0e24-41a8-af09-538396aad935-kube-api-access-q86gx\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:10.036484 master-0 kubenswrapper[7864]: I0224 02:10:10.036323 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-catalog-content\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:10.036484 master-0 kubenswrapper[7864]: I0224 02:10:10.036379 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-catalog-content\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:10.036484 master-0 kubenswrapper[7864]: I0224 02:10:10.036425 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg7sb\" (UniqueName: \"kubernetes.io/projected/e56a17d6-d740-4349-833e-b5279f7db2d4-kube-api-access-gg7sb\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:10.036484 master-0 kubenswrapper[7864]: I0224 02:10:10.036472 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-utilities\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:10.037333 master-0 kubenswrapper[7864]: I0224 02:10:10.037276 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-utilities\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:10.037431 master-0 kubenswrapper[7864]: I0224 02:10:10.037291 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-utilities\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:10.038071 master-0 kubenswrapper[7864]: I0224 02:10:10.038025 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-catalog-content\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:10.038531 master-0 kubenswrapper[7864]: I0224 02:10:10.038463 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-catalog-content\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:10.053450 master-0 kubenswrapper[7864]: I0224 02:10:10.053376 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q86gx\" (UniqueName: \"kubernetes.io/projected/b085f760-0e24-41a8-af09-538396aad935-kube-api-access-q86gx\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:10.060191 master-0 kubenswrapper[7864]: I0224 02:10:10.060148 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg7sb\" (UniqueName: \"kubernetes.io/projected/e56a17d6-d740-4349-833e-b5279f7db2d4-kube-api-access-gg7sb\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:10.120086 master-0 kubenswrapper[7864]: I0224 02:10:10.119987 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:10:10.188639 master-0 kubenswrapper[7864]: I0224 02:10:10.188332 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:10:10.200247 master-0 kubenswrapper[7864]: I0224 02:10:10.199351 7864 generic.go:334] "Generic (PLEG): container finished" podID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerID="2532aa7e9c30e14420bff6125a2af5695149f5e5139bfadc9ef461f7fd4adc89" exitCode=0
Feb 24 02:10:10.200247 master-0 kubenswrapper[7864]: I0224 02:10:10.199452 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g862w" event={"ID":"66fc4bf9-47d0-4530-a49e-912a61cc35fd","Type":"ContainerDied","Data":"2532aa7e9c30e14420bff6125a2af5695149f5e5139bfadc9ef461f7fd4adc89"}
Feb 24 02:10:10.202613 master-0 kubenswrapper[7864]: I0224 02:10:10.202544 7864 generic.go:334] "Generic (PLEG): container finished" podID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerID="322843bc1875642e381de77fa85af19267060376aa5ec0dd58286deaec2ce5ae" exitCode=0
Feb 24 02:10:10.202719 master-0 kubenswrapper[7864]: I0224 02:10:10.202627 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrmdr" event={"ID":"37e3de57-34a2-4d55-9200-1bb94530c4ba","Type":"ContainerDied","Data":"322843bc1875642e381de77fa85af19267060376aa5ec0dd58286deaec2ce5ae"}
Feb 24 02:10:10.323665 master-0 kubenswrapper[7864]: I0224 02:10:10.323620 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:10:10.333296 master-0 kubenswrapper[7864]: I0224 02:10:10.333210 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g862w"
Feb 24 02:10:10.444327 master-0 kubenswrapper[7864]: I0224 02:10:10.444203 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-catalog-content\") pod \"37e3de57-34a2-4d55-9200-1bb94530c4ba\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") "
Feb 24 02:10:10.444327 master-0 kubenswrapper[7864]: I0224 02:10:10.444249 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-utilities\") pod \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") "
Feb 24 02:10:10.444327 master-0 kubenswrapper[7864]: I0224 02:10:10.444275 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-utilities\") pod \"37e3de57-34a2-4d55-9200-1bb94530c4ba\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") "
Feb 24 02:10:10.444623 master-0 kubenswrapper[7864]: I0224 02:10:10.444351 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-catalog-content\") pod \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") "
Feb 24 02:10:10.444623 master-0 kubenswrapper[7864]: I0224 02:10:10.444380 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z6d6\" (UniqueName: \"kubernetes.io/projected/37e3de57-34a2-4d55-9200-1bb94530c4ba-kube-api-access-5z6d6\") pod \"37e3de57-34a2-4d55-9200-1bb94530c4ba\" (UID: \"37e3de57-34a2-4d55-9200-1bb94530c4ba\") "
Feb 24 02:10:10.444623 master-0 kubenswrapper[7864]: I0224 02:10:10.444408 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zf97\" (UniqueName: \"kubernetes.io/projected/66fc4bf9-47d0-4530-a49e-912a61cc35fd-kube-api-access-7zf97\") pod \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\" (UID: \"66fc4bf9-47d0-4530-a49e-912a61cc35fd\") "
Feb 24 02:10:10.445106 master-0 kubenswrapper[7864]: I0224 02:10:10.445058 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-utilities" (OuterVolumeSpecName: "utilities") pod "37e3de57-34a2-4d55-9200-1bb94530c4ba" (UID: "37e3de57-34a2-4d55-9200-1bb94530c4ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:10:10.447368 master-0 kubenswrapper[7864]: I0224 02:10:10.446421 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-utilities" (OuterVolumeSpecName: "utilities") pod "66fc4bf9-47d0-4530-a49e-912a61cc35fd" (UID: "66fc4bf9-47d0-4530-a49e-912a61cc35fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:10:10.448346 master-0 kubenswrapper[7864]: I0224 02:10:10.448272 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66fc4bf9-47d0-4530-a49e-912a61cc35fd-kube-api-access-7zf97" (OuterVolumeSpecName: "kube-api-access-7zf97") pod "66fc4bf9-47d0-4530-a49e-912a61cc35fd" (UID: "66fc4bf9-47d0-4530-a49e-912a61cc35fd"). InnerVolumeSpecName "kube-api-access-7zf97". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:10:10.449011 master-0 kubenswrapper[7864]: I0224 02:10:10.448974 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e3de57-34a2-4d55-9200-1bb94530c4ba-kube-api-access-5z6d6" (OuterVolumeSpecName: "kube-api-access-5z6d6") pod "37e3de57-34a2-4d55-9200-1bb94530c4ba" (UID: "37e3de57-34a2-4d55-9200-1bb94530c4ba"). InnerVolumeSpecName "kube-api-access-5z6d6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:10:10.473626 master-0 kubenswrapper[7864]: I0224 02:10:10.473519 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37e3de57-34a2-4d55-9200-1bb94530c4ba" (UID: "37e3de57-34a2-4d55-9200-1bb94530c4ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:10:10.545689 master-0 kubenswrapper[7864]: I0224 02:10:10.545635 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z6d6\" (UniqueName: \"kubernetes.io/projected/37e3de57-34a2-4d55-9200-1bb94530c4ba-kube-api-access-5z6d6\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:10.545689 master-0 kubenswrapper[7864]: I0224 02:10:10.545683 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zf97\" (UniqueName: \"kubernetes.io/projected/66fc4bf9-47d0-4530-a49e-912a61cc35fd-kube-api-access-7zf97\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:10.545923 master-0 kubenswrapper[7864]: I0224 02:10:10.545705 7864 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-utilities\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:10.545923 master-0 kubenswrapper[7864]: I0224 02:10:10.545726 7864 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:10.545923 master-0 kubenswrapper[7864]: I0224 02:10:10.545745 7864 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e3de57-34a2-4d55-9200-1bb94530c4ba-utilities\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:10.565323 master-0 kubenswrapper[7864]: I0224 02:10:10.565276 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4znnj"]
Feb 24 02:10:10.644338 master-0 kubenswrapper[7864]: I0224 02:10:10.644272 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqt7p"]
Feb 24 02:10:10.645490 master-0 kubenswrapper[7864]: I0224 02:10:10.645418 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66fc4bf9-47d0-4530-a49e-912a61cc35fd" (UID: "66fc4bf9-47d0-4530-a49e-912a61cc35fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:10:10.648084 master-0 kubenswrapper[7864]: I0224 02:10:10.646128 7864 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66fc4bf9-47d0-4530-a49e-912a61cc35fd-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:10.659964 master-0 kubenswrapper[7864]: W0224 02:10:10.659902 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb085f760_0e24_41a8_af09_538396aad935.slice/crio-f0367a6433cb34322b034ae4858460e50d6150d575d08858a19734f838b6527f WatchSource:0}: Error finding container f0367a6433cb34322b034ae4858460e50d6150d575d08858a19734f838b6527f: Status 404 returned error can't find the container with id f0367a6433cb34322b034ae4858460e50d6150d575d08858a19734f838b6527f
Feb 24 02:10:11.228386 master-0 kubenswrapper[7864]: I0224 02:10:11.228195 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hrmdr" event={"ID":"37e3de57-34a2-4d55-9200-1bb94530c4ba","Type":"ContainerDied","Data":"4dae55f823ae7282c11d348d0d8dbe2c0d2f0c63af0a55678ff766d1c40919a2"}
Feb 24 02:10:11.228386 master-0 kubenswrapper[7864]: I0224 02:10:11.228271 7864 scope.go:117] "RemoveContainer" containerID="322843bc1875642e381de77fa85af19267060376aa5ec0dd58286deaec2ce5ae"
Feb 24 02:10:11.229303 master-0 kubenswrapper[7864]: I0224 02:10:11.228412 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hrmdr"
Feb 24 02:10:11.239697 master-0 kubenswrapper[7864]: I0224 02:10:11.239619 7864 generic.go:334] "Generic (PLEG): container finished" podID="b085f760-0e24-41a8-af09-538396aad935" containerID="aab5f16cb62468cbd33a1b962837194ed256ccf00334d577f86ad2e704134976" exitCode=0
Feb 24 02:10:11.239842 master-0 kubenswrapper[7864]: I0224 02:10:11.239788 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqt7p" event={"ID":"b085f760-0e24-41a8-af09-538396aad935","Type":"ContainerDied","Data":"aab5f16cb62468cbd33a1b962837194ed256ccf00334d577f86ad2e704134976"}
Feb 24 02:10:11.239917 master-0 kubenswrapper[7864]: I0224 02:10:11.239842 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqt7p" event={"ID":"b085f760-0e24-41a8-af09-538396aad935","Type":"ContainerStarted","Data":"f0367a6433cb34322b034ae4858460e50d6150d575d08858a19734f838b6527f"}
Feb 24 02:10:11.249467 master-0 kubenswrapper[7864]: I0224 02:10:11.249399 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g862w" event={"ID":"66fc4bf9-47d0-4530-a49e-912a61cc35fd","Type":"ContainerDied","Data":"854b5254b3aab6aa03516df6e3f68bfe61a7f44a97343b1454b52cf5fb8570fb"}
Feb 24 02:10:11.249467 master-0 kubenswrapper[7864]: I0224 02:10:11.249420 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g862w"
Feb 24 02:10:11.254464 master-0 kubenswrapper[7864]: I0224 02:10:11.254394 7864 generic.go:334] "Generic (PLEG): container finished" podID="e56a17d6-d740-4349-833e-b5279f7db2d4" containerID="1e7c0eb2fdff11adf850fda4f441e025c2724fb8123e10487c2648065fa6f259" exitCode=0
Feb 24 02:10:11.254751 master-0 kubenswrapper[7864]: I0224 02:10:11.254471 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4znnj" event={"ID":"e56a17d6-d740-4349-833e-b5279f7db2d4","Type":"ContainerDied","Data":"1e7c0eb2fdff11adf850fda4f441e025c2724fb8123e10487c2648065fa6f259"}
Feb 24 02:10:11.254751 master-0 kubenswrapper[7864]: I0224 02:10:11.254534 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4znnj" event={"ID":"e56a17d6-d740-4349-833e-b5279f7db2d4","Type":"ContainerStarted","Data":"29c6111030d71a276fc5ae8422a3897c52faae1bbf5d2f44516c595b0829852b"}
Feb 24 02:10:11.256685 master-0 kubenswrapper[7864]: I0224 02:10:11.256640 7864 scope.go:117] "RemoveContainer" containerID="070c8f9bc2ed166f63fc4c2e18cc7b61c41cc8c4f62eef10cba0347b004004dd"
Feb 24 02:10:11.301762 master-0 kubenswrapper[7864]: I0224 02:10:11.301676 7864 scope.go:117] "RemoveContainer" containerID="3a7cb7544ef9adc2d8aa9406b02bf17a505621aa7f06c4c4f21130b77ce2cb96"
Feb 24 02:10:11.306275 master-0 kubenswrapper[7864]: I0224 02:10:11.306239 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrmdr"]
Feb 24 02:10:11.310062 master-0 kubenswrapper[7864]: I0224 02:10:11.310010 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hrmdr"]
Feb 24 02:10:11.335961 master-0 kubenswrapper[7864]: I0224 02:10:11.335913 7864 scope.go:117] "RemoveContainer" containerID="2532aa7e9c30e14420bff6125a2af5695149f5e5139bfadc9ef461f7fd4adc89"
Feb 24 02:10:11.355659 master-0 kubenswrapper[7864]: I0224 02:10:11.355592 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g862w"]
Feb 24 02:10:11.362775 master-0 kubenswrapper[7864]: I0224 02:10:11.362661 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g862w"]
Feb 24 02:10:11.367009 master-0 kubenswrapper[7864]: I0224 02:10:11.366941 7864 scope.go:117] "RemoveContainer" containerID="b78034075f91df2edeb691499dd0e273ccdf1af852814089e22a4e04132f7e74"
Feb 24 02:10:11.385265 master-0 kubenswrapper[7864]: I0224 02:10:11.385220 7864 scope.go:117] "RemoveContainer" containerID="62c61ea685817e2cfa906ea1d16d68da17c02d130b5c19c33206e9b53300e193"
Feb 24 02:10:11.881128 master-0 kubenswrapper[7864]: I0224 02:10:11.881051 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" path="/var/lib/kubelet/pods/37e3de57-34a2-4d55-9200-1bb94530c4ba/volumes"
Feb 24 02:10:11.882307 master-0 kubenswrapper[7864]: I0224 02:10:11.882279 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" path="/var/lib/kubelet/pods/66fc4bf9-47d0-4530-a49e-912a61cc35fd/volumes"
Feb 24 02:10:11.994043 master-0 kubenswrapper[7864]: I0224 02:10:11.993992 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dwmm5"]
Feb 24 02:10:11.994421 master-0 kubenswrapper[7864]: I0224 02:10:11.994390 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dwmm5" podUID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerName="registry-server" containerID="cri-o://0b73ffd7abb9891a785ae20eb828e6d67f31fc7de0f336bc6f4c9a7fd4183589" gracePeriod=2
Feb 24 02:10:12.196195 master-0 kubenswrapper[7864]: I0224 02:10:12.196003 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rvp5j"]
Feb 24 02:10:12.198885 master-0 kubenswrapper[7864]: I0224 02:10:12.196730 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rvp5j" podUID="dae6353d-97ee-46f8-8430-0b5211134a03" containerName="registry-server" containerID="cri-o://205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077" gracePeriod=2
Feb 24 02:10:12.264788 master-0 kubenswrapper[7864]: I0224 02:10:12.264748 7864 generic.go:334] "Generic (PLEG): container finished" podID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerID="0b73ffd7abb9891a785ae20eb828e6d67f31fc7de0f336bc6f4c9a7fd4183589" exitCode=0
Feb 24 02:10:12.265179 master-0 kubenswrapper[7864]: I0224 02:10:12.264863 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmm5" event={"ID":"afda9f0b-a304-490a-a080-0384a0a4e85b","Type":"ContainerDied","Data":"0b73ffd7abb9891a785ae20eb828e6d67f31fc7de0f336bc6f4c9a7fd4183589"}
Feb 24 02:10:12.415159 master-0 kubenswrapper[7864]: I0224 02:10:12.415083 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dwmm5"
Feb 24 02:10:12.421108 master-0 kubenswrapper[7864]: I0224 02:10:12.421060 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-brpmb"]
Feb 24 02:10:12.421301 master-0 kubenswrapper[7864]: E0224 02:10:12.421276 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerName="extract-content"
Feb 24 02:10:12.421301 master-0 kubenswrapper[7864]: I0224 02:10:12.421294 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerName="extract-content"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: E0224 02:10:12.421304 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerName="registry-server"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: I0224 02:10:12.421311 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerName="registry-server"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: E0224 02:10:12.421321 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerName="extract-content"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: I0224 02:10:12.421327 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerName="extract-content"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: E0224 02:10:12.421344 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerName="extract-content"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: I0224 02:10:12.421362 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerName="extract-content"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: E0224 02:10:12.421371 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerName="extract-utilities"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: I0224 02:10:12.421376 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerName="extract-utilities"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: E0224 02:10:12.421389 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerName="extract-utilities"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: I0224 02:10:12.421394 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerName="extract-utilities"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: E0224 02:10:12.421403 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerName="registry-server"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: I0224 02:10:12.421409 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerName="registry-server"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: E0224 02:10:12.421419 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerName="extract-utilities"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: I0224 02:10:12.421426 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerName="extract-utilities"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: E0224 02:10:12.421436 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerName="registry-server"
Feb 24 02:10:12.421409 master-0 kubenswrapper[7864]: I0224 02:10:12.421441 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerName="registry-server"
Feb 24 02:10:12.422055 master-0 kubenswrapper[7864]: I0224 02:10:12.421525 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="66fc4bf9-47d0-4530-a49e-912a61cc35fd" containerName="registry-server"
Feb 24 02:10:12.422055 master-0 kubenswrapper[7864]: I0224 02:10:12.421533 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="afda9f0b-a304-490a-a080-0384a0a4e85b" containerName="registry-server"
Feb 24 02:10:12.422055 master-0 kubenswrapper[7864]: I0224 02:10:12.421540 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e3de57-34a2-4d55-9200-1bb94530c4ba" containerName="registry-server"
Feb 24 02:10:12.422317 master-0 kubenswrapper[7864]: I0224 02:10:12.422266 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-brpmb"
Feb 24 02:10:12.423567 master-0 kubenswrapper[7864]: I0224 02:10:12.423525 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-k5dgr"
Feb 24 02:10:12.429511 master-0 kubenswrapper[7864]: I0224 02:10:12.429473 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brpmb"]
Feb 24 02:10:12.581110 master-0 kubenswrapper[7864]: I0224 02:10:12.581053 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-utilities\") pod \"afda9f0b-a304-490a-a080-0384a0a4e85b\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") "
Feb 24 02:10:12.581300 master-0 kubenswrapper[7864]: I0224 02:10:12.581268 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-catalog-content\") pod \"afda9f0b-a304-490a-a080-0384a0a4e85b\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") "
Feb 24 02:10:12.581360 master-0 kubenswrapper[7864]: I0224 02:10:12.581313 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rmqn\" (UniqueName: \"kubernetes.io/projected/afda9f0b-a304-490a-a080-0384a0a4e85b-kube-api-access-5rmqn\") pod \"afda9f0b-a304-490a-a080-0384a0a4e85b\" (UID: \"afda9f0b-a304-490a-a080-0384a0a4e85b\") "
Feb 24 02:10:12.581687 master-0 kubenswrapper[7864]: I0224 02:10:12.581649 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9ngc\" (UniqueName: \"kubernetes.io/projected/0cb042de-c873-408c-a4c4-ef9f7e546a08-kube-api-access-p9ngc\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb"
Feb 24 02:10:12.581745 master-0 kubenswrapper[7864]: I0224 02:10:12.581724 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-catalog-content\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb"
Feb 24 02:10:12.581815 master-0 kubenswrapper[7864]: I0224 02:10:12.581788 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-utilities\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb"
Feb 24 02:10:12.583291 master-0 kubenswrapper[7864]: I0224 02:10:12.583247 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-utilities" (OuterVolumeSpecName: "utilities") pod
"afda9f0b-a304-490a-a080-0384a0a4e85b" (UID: "afda9f0b-a304-490a-a080-0384a0a4e85b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:10:12.592861 master-0 kubenswrapper[7864]: I0224 02:10:12.590386 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afda9f0b-a304-490a-a080-0384a0a4e85b-kube-api-access-5rmqn" (OuterVolumeSpecName: "kube-api-access-5rmqn") pod "afda9f0b-a304-490a-a080-0384a0a4e85b" (UID: "afda9f0b-a304-490a-a080-0384a0a4e85b"). InnerVolumeSpecName "kube-api-access-5rmqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:10:12.601203 master-0 kubenswrapper[7864]: I0224 02:10:12.601177 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvp5j" Feb 24 02:10:12.607406 master-0 kubenswrapper[7864]: I0224 02:10:12.607356 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kkwwl"] Feb 24 02:10:12.608556 master-0 kubenswrapper[7864]: E0224 02:10:12.608531 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae6353d-97ee-46f8-8430-0b5211134a03" containerName="extract-utilities" Feb 24 02:10:12.608556 master-0 kubenswrapper[7864]: I0224 02:10:12.608556 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae6353d-97ee-46f8-8430-0b5211134a03" containerName="extract-utilities" Feb 24 02:10:12.608688 master-0 kubenswrapper[7864]: E0224 02:10:12.608590 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae6353d-97ee-46f8-8430-0b5211134a03" containerName="registry-server" Feb 24 02:10:12.608688 master-0 kubenswrapper[7864]: I0224 02:10:12.608600 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae6353d-97ee-46f8-8430-0b5211134a03" containerName="registry-server" Feb 24 02:10:12.608688 master-0 kubenswrapper[7864]: E0224 02:10:12.608611 7864 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dae6353d-97ee-46f8-8430-0b5211134a03" containerName="extract-content" Feb 24 02:10:12.608688 master-0 kubenswrapper[7864]: I0224 02:10:12.608618 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae6353d-97ee-46f8-8430-0b5211134a03" containerName="extract-content" Feb 24 02:10:12.610255 master-0 kubenswrapper[7864]: I0224 02:10:12.608739 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae6353d-97ee-46f8-8430-0b5211134a03" containerName="registry-server" Feb 24 02:10:12.610255 master-0 kubenswrapper[7864]: I0224 02:10:12.609539 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.611732 master-0 kubenswrapper[7864]: I0224 02:10:12.611689 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lbd2d" Feb 24 02:10:12.634673 master-0 kubenswrapper[7864]: I0224 02:10:12.630610 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kkwwl"] Feb 24 02:10:12.669240 master-0 kubenswrapper[7864]: I0224 02:10:12.669067 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afda9f0b-a304-490a-a080-0384a0a4e85b" (UID: "afda9f0b-a304-490a-a080-0384a0a4e85b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:10:12.688687 master-0 kubenswrapper[7864]: I0224 02:10:12.683361 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-utilities\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:12.688687 master-0 kubenswrapper[7864]: I0224 02:10:12.683461 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9ngc\" (UniqueName: \"kubernetes.io/projected/0cb042de-c873-408c-a4c4-ef9f7e546a08-kube-api-access-p9ngc\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:12.688687 master-0 kubenswrapper[7864]: I0224 02:10:12.683502 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-catalog-content\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:12.688687 master-0 kubenswrapper[7864]: I0224 02:10:12.683550 7864 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-utilities\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:12.688687 master-0 kubenswrapper[7864]: I0224 02:10:12.683566 7864 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afda9f0b-a304-490a-a080-0384a0a4e85b-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:12.688687 master-0 kubenswrapper[7864]: I0224 02:10:12.683595 7864 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-5rmqn\" (UniqueName: \"kubernetes.io/projected/afda9f0b-a304-490a-a080-0384a0a4e85b-kube-api-access-5rmqn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:12.688687 master-0 kubenswrapper[7864]: I0224 02:10:12.684012 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-catalog-content\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:12.688687 master-0 kubenswrapper[7864]: I0224 02:10:12.684235 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-utilities\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:12.711956 master-0 kubenswrapper[7864]: I0224 02:10:12.711916 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9ngc\" (UniqueName: \"kubernetes.io/projected/0cb042de-c873-408c-a4c4-ef9f7e546a08-kube-api-access-p9ngc\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:12.785353 master-0 kubenswrapper[7864]: I0224 02:10:12.785287 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9f88\" (UniqueName: \"kubernetes.io/projected/dae6353d-97ee-46f8-8430-0b5211134a03-kube-api-access-n9f88\") pod \"dae6353d-97ee-46f8-8430-0b5211134a03\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " Feb 24 02:10:12.785504 master-0 kubenswrapper[7864]: I0224 02:10:12.785429 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-catalog-content\") pod \"dae6353d-97ee-46f8-8430-0b5211134a03\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " Feb 24 02:10:12.785504 master-0 kubenswrapper[7864]: I0224 02:10:12.785475 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-utilities\") pod \"dae6353d-97ee-46f8-8430-0b5211134a03\" (UID: \"dae6353d-97ee-46f8-8430-0b5211134a03\") " Feb 24 02:10:12.786844 master-0 kubenswrapper[7864]: I0224 02:10:12.786802 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pc72\" (UniqueName: \"kubernetes.io/projected/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-kube-api-access-5pc72\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.787075 master-0 kubenswrapper[7864]: I0224 02:10:12.786865 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-catalog-content\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.787075 master-0 kubenswrapper[7864]: I0224 02:10:12.786935 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-utilities\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.787567 master-0 kubenswrapper[7864]: I0224 02:10:12.787524 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-utilities" (OuterVolumeSpecName: "utilities") pod "dae6353d-97ee-46f8-8430-0b5211134a03" (UID: "dae6353d-97ee-46f8-8430-0b5211134a03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:10:12.789749 master-0 kubenswrapper[7864]: I0224 02:10:12.789697 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae6353d-97ee-46f8-8430-0b5211134a03-kube-api-access-n9f88" (OuterVolumeSpecName: "kube-api-access-n9f88") pod "dae6353d-97ee-46f8-8430-0b5211134a03" (UID: "dae6353d-97ee-46f8-8430-0b5211134a03"). InnerVolumeSpecName "kube-api-access-n9f88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:10:12.836085 master-0 kubenswrapper[7864]: I0224 02:10:12.836023 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dae6353d-97ee-46f8-8430-0b5211134a03" (UID: "dae6353d-97ee-46f8-8430-0b5211134a03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:10:12.874669 master-0 kubenswrapper[7864]: I0224 02:10:12.874618 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:12.887866 master-0 kubenswrapper[7864]: I0224 02:10:12.887813 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-utilities\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.887866 master-0 kubenswrapper[7864]: I0224 02:10:12.887862 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pc72\" (UniqueName: \"kubernetes.io/projected/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-kube-api-access-5pc72\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.888072 master-0 kubenswrapper[7864]: I0224 02:10:12.888040 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-catalog-content\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.888200 master-0 kubenswrapper[7864]: I0224 02:10:12.888166 7864 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-utilities\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:12.888200 master-0 kubenswrapper[7864]: I0224 02:10:12.888188 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9f88\" (UniqueName: \"kubernetes.io/projected/dae6353d-97ee-46f8-8430-0b5211134a03-kube-api-access-n9f88\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:12.888200 master-0 kubenswrapper[7864]: I0224 02:10:12.888200 7864 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dae6353d-97ee-46f8-8430-0b5211134a03-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:12.888610 master-0 kubenswrapper[7864]: I0224 02:10:12.888529 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-utilities\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.888707 master-0 kubenswrapper[7864]: I0224 02:10:12.888618 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-catalog-content\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.906121 master-0 kubenswrapper[7864]: I0224 02:10:12.906071 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pc72\" (UniqueName: \"kubernetes.io/projected/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-kube-api-access-5pc72\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:12.931512 master-0 kubenswrapper[7864]: I0224 02:10:12.930585 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:13.282905 master-0 kubenswrapper[7864]: I0224 02:10:13.282859 7864 generic.go:334] "Generic (PLEG): container finished" podID="b085f760-0e24-41a8-af09-538396aad935" containerID="532c8330be4e1bb00f3b8c98db49eb86ee33fea1c47fd0eb58ed9999c987cc56" exitCode=0 Feb 24 02:10:13.284217 master-0 kubenswrapper[7864]: I0224 02:10:13.283079 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqt7p" event={"ID":"b085f760-0e24-41a8-af09-538396aad935","Type":"ContainerDied","Data":"532c8330be4e1bb00f3b8c98db49eb86ee33fea1c47fd0eb58ed9999c987cc56"} Feb 24 02:10:13.300058 master-0 kubenswrapper[7864]: I0224 02:10:13.287519 7864 generic.go:334] "Generic (PLEG): container finished" podID="e56a17d6-d740-4349-833e-b5279f7db2d4" containerID="5c2bae7a5f82ac2dacb3d782b5c120ab2d48eaa30503f169922755bafd417358" exitCode=0 Feb 24 02:10:13.300058 master-0 kubenswrapper[7864]: I0224 02:10:13.287647 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4znnj" event={"ID":"e56a17d6-d740-4349-833e-b5279f7db2d4","Type":"ContainerDied","Data":"5c2bae7a5f82ac2dacb3d782b5c120ab2d48eaa30503f169922755bafd417358"} Feb 24 02:10:13.306344 master-0 kubenswrapper[7864]: I0224 02:10:13.306313 7864 generic.go:334] "Generic (PLEG): container finished" podID="dae6353d-97ee-46f8-8430-0b5211134a03" containerID="205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077" exitCode=0 Feb 24 02:10:13.306527 master-0 kubenswrapper[7864]: I0224 02:10:13.306491 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp5j" event={"ID":"dae6353d-97ee-46f8-8430-0b5211134a03","Type":"ContainerDied","Data":"205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077"} Feb 24 02:10:13.306670 master-0 kubenswrapper[7864]: I0224 02:10:13.306653 7864 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-rvp5j" event={"ID":"dae6353d-97ee-46f8-8430-0b5211134a03","Type":"ContainerDied","Data":"50356bfb3d2696621a889bb656126f097ae204f6e3e58121f4dfd1103a714bae"} Feb 24 02:10:13.306782 master-0 kubenswrapper[7864]: I0224 02:10:13.306768 7864 scope.go:117] "RemoveContainer" containerID="205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077" Feb 24 02:10:13.307011 master-0 kubenswrapper[7864]: I0224 02:10:13.306994 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvp5j" Feb 24 02:10:13.350664 master-0 kubenswrapper[7864]: I0224 02:10:13.350249 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmm5" event={"ID":"afda9f0b-a304-490a-a080-0384a0a4e85b","Type":"ContainerDied","Data":"d63981b48a66be59479eff1aa85e6d210bec4551670260a1b35a8bdeca808631"} Feb 24 02:10:13.350664 master-0 kubenswrapper[7864]: I0224 02:10:13.350380 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dwmm5" Feb 24 02:10:13.376313 master-0 kubenswrapper[7864]: I0224 02:10:13.376199 7864 scope.go:117] "RemoveContainer" containerID="71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38" Feb 24 02:10:13.408394 master-0 kubenswrapper[7864]: I0224 02:10:13.408346 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dwmm5"] Feb 24 02:10:13.409134 master-0 kubenswrapper[7864]: W0224 02:10:13.409103 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cb042de_c873_408c_a4c4_ef9f7e546a08.slice/crio-9867de2597cef8ea21b27065bc6c5ebb42eda67f6c0e55aa21a2e92ed8089c54 WatchSource:0}: Error finding container 9867de2597cef8ea21b27065bc6c5ebb42eda67f6c0e55aa21a2e92ed8089c54: Status 404 returned error can't find the container with id 9867de2597cef8ea21b27065bc6c5ebb42eda67f6c0e55aa21a2e92ed8089c54 Feb 24 02:10:13.411272 master-0 kubenswrapper[7864]: I0224 02:10:13.411113 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brpmb"] Feb 24 02:10:13.413681 master-0 kubenswrapper[7864]: I0224 02:10:13.413605 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dwmm5"] Feb 24 02:10:13.415472 master-0 kubenswrapper[7864]: I0224 02:10:13.415443 7864 scope.go:117] "RemoveContainer" containerID="efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62" Feb 24 02:10:13.416925 master-0 kubenswrapper[7864]: I0224 02:10:13.416891 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rvp5j"] Feb 24 02:10:13.419518 master-0 kubenswrapper[7864]: I0224 02:10:13.419475 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rvp5j"] Feb 24 02:10:13.432621 master-0 kubenswrapper[7864]: I0224 
02:10:13.432592 7864 scope.go:117] "RemoveContainer" containerID="205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077" Feb 24 02:10:13.432984 master-0 kubenswrapper[7864]: E0224 02:10:13.432943 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077\": container with ID starting with 205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077 not found: ID does not exist" containerID="205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077" Feb 24 02:10:13.433082 master-0 kubenswrapper[7864]: I0224 02:10:13.432999 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077"} err="failed to get container status \"205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077\": rpc error: code = NotFound desc = could not find container \"205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077\": container with ID starting with 205f85ade9249d8470ee4fc1fdb70f84501cc4dfa70d448bf2660e191d096077 not found: ID does not exist" Feb 24 02:10:13.433082 master-0 kubenswrapper[7864]: I0224 02:10:13.433029 7864 scope.go:117] "RemoveContainer" containerID="71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38" Feb 24 02:10:13.433165 master-0 kubenswrapper[7864]: I0224 02:10:13.433096 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kkwwl"] Feb 24 02:10:13.433492 master-0 kubenswrapper[7864]: E0224 02:10:13.433457 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38\": container with ID starting with 71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38 not found: ID does not exist" 
containerID="71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38" Feb 24 02:10:13.433656 master-0 kubenswrapper[7864]: I0224 02:10:13.433501 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38"} err="failed to get container status \"71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38\": rpc error: code = NotFound desc = could not find container \"71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38\": container with ID starting with 71bc8bbb7d9892bd409de5b6757bf0fbd43d044cbbf9e4957d9d2d3633dcae38 not found: ID does not exist" Feb 24 02:10:13.433656 master-0 kubenswrapper[7864]: I0224 02:10:13.433627 7864 scope.go:117] "RemoveContainer" containerID="efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62" Feb 24 02:10:13.434153 master-0 kubenswrapper[7864]: E0224 02:10:13.434110 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62\": container with ID starting with efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62 not found: ID does not exist" containerID="efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62" Feb 24 02:10:13.434214 master-0 kubenswrapper[7864]: I0224 02:10:13.434140 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62"} err="failed to get container status \"efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62\": rpc error: code = NotFound desc = could not find container \"efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62\": container with ID starting with efcc1a9e753b5314b68914ac8cee7200d3873607fe40f63d396a194371722d62 not found: ID does not exist" Feb 24 02:10:13.434214 master-0 
kubenswrapper[7864]: I0224 02:10:13.434170 7864 scope.go:117] "RemoveContainer" containerID="0b73ffd7abb9891a785ae20eb828e6d67f31fc7de0f336bc6f4c9a7fd4183589" Feb 24 02:10:13.440487 master-0 kubenswrapper[7864]: W0224 02:10:13.440441 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4267e3a_aaaf_4b2f_a37c_0f097a35783f.slice/crio-200fe8533b917a93531075b1165c1c0e535a0cad7b646b4108d64e256389832b WatchSource:0}: Error finding container 200fe8533b917a93531075b1165c1c0e535a0cad7b646b4108d64e256389832b: Status 404 returned error can't find the container with id 200fe8533b917a93531075b1165c1c0e535a0cad7b646b4108d64e256389832b Feb 24 02:10:13.465795 master-0 kubenswrapper[7864]: I0224 02:10:13.465759 7864 scope.go:117] "RemoveContainer" containerID="256f4504e3d4b82fdb34220ca0fb4428b8d341655ed1ff64de7c87b7b80ba566" Feb 24 02:10:13.483653 master-0 kubenswrapper[7864]: I0224 02:10:13.483601 7864 scope.go:117] "RemoveContainer" containerID="2c80afcf16dd6a27d5d95ff164c146d6550b41c092afe1dfc56432c7b3cf9a3d" Feb 24 02:10:13.882980 master-0 kubenswrapper[7864]: I0224 02:10:13.882850 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afda9f0b-a304-490a-a080-0384a0a4e85b" path="/var/lib/kubelet/pods/afda9f0b-a304-490a-a080-0384a0a4e85b/volumes" Feb 24 02:10:13.883692 master-0 kubenswrapper[7864]: I0224 02:10:13.883648 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae6353d-97ee-46f8-8430-0b5211134a03" path="/var/lib/kubelet/pods/dae6353d-97ee-46f8-8430-0b5211134a03/volumes" Feb 24 02:10:14.030334 master-0 kubenswrapper[7864]: I0224 02:10:14.030275 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"] Feb 24 02:10:14.032523 master-0 kubenswrapper[7864]: I0224 02:10:14.032451 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.036357 master-0 kubenswrapper[7864]: I0224 02:10:14.036309 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 24 02:10:14.036684 master-0 kubenswrapper[7864]: I0224 02:10:14.036638 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 24 02:10:14.036773 master-0 kubenswrapper[7864]: I0224 02:10:14.036702 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 24 02:10:14.036999 master-0 kubenswrapper[7864]: I0224 02:10:14.036945 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 24 02:10:14.036999 master-0 kubenswrapper[7864]: I0224 02:10:14.036950 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-7xngw"
Feb 24 02:10:14.037281 master-0 kubenswrapper[7864]: I0224 02:10:14.036996 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 24 02:10:14.112345 master-0 kubenswrapper[7864]: I0224 02:10:14.112281 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-auth-proxy-config\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.112461 master-0 kubenswrapper[7864]: I0224 02:10:14.112365 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhfzs\" (UniqueName: \"kubernetes.io/projected/b1fb12ad-9035-4039-9be6-6f8a84b93063-kube-api-access-nhfzs\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.112461 master-0 kubenswrapper[7864]: I0224 02:10:14.112398 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b1fb12ad-9035-4039-9be6-6f8a84b93063-machine-approver-tls\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.112461 master-0 kubenswrapper[7864]: I0224 02:10:14.112435 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-config\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.216075 master-0 kubenswrapper[7864]: I0224 02:10:14.215989 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-auth-proxy-config\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.216075 master-0 kubenswrapper[7864]: I0224 02:10:14.216068 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhfzs\" (UniqueName: \"kubernetes.io/projected/b1fb12ad-9035-4039-9be6-6f8a84b93063-kube-api-access-nhfzs\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.216075 master-0 kubenswrapper[7864]: I0224 02:10:14.216087 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b1fb12ad-9035-4039-9be6-6f8a84b93063-machine-approver-tls\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.216824 master-0 kubenswrapper[7864]: I0224 02:10:14.216117 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-config\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.216824 master-0 kubenswrapper[7864]: I0224 02:10:14.216738 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-config\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.218182 master-0 kubenswrapper[7864]: I0224 02:10:14.217402 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-auth-proxy-config\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.262697 master-0 kubenswrapper[7864]: I0224 02:10:14.253001 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b1fb12ad-9035-4039-9be6-6f8a84b93063-machine-approver-tls\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.263125 master-0 kubenswrapper[7864]: I0224 02:10:14.263091 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhfzs\" (UniqueName: \"kubernetes.io/projected/b1fb12ad-9035-4039-9be6-6f8a84b93063-kube-api-access-nhfzs\") pod \"machine-approver-798b897698-rqrlc\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:14.365535 master-0 kubenswrapper[7864]: I0224 02:10:14.365459 7864 generic.go:334] "Generic (PLEG): container finished" podID="0cb042de-c873-408c-a4c4-ef9f7e546a08" containerID="700bebabab614b094ae34ffb33548f3295723695a9fad8972c8014d17036eac5" exitCode=0
Feb 24 02:10:14.366095 master-0 kubenswrapper[7864]: I0224 02:10:14.365540 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brpmb" event={"ID":"0cb042de-c873-408c-a4c4-ef9f7e546a08","Type":"ContainerDied","Data":"700bebabab614b094ae34ffb33548f3295723695a9fad8972c8014d17036eac5"}
Feb 24 02:10:14.366095 master-0 kubenswrapper[7864]: I0224 02:10:14.365641 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brpmb" event={"ID":"0cb042de-c873-408c-a4c4-ef9f7e546a08","Type":"ContainerStarted","Data":"9867de2597cef8ea21b27065bc6c5ebb42eda67f6c0e55aa21a2e92ed8089c54"}
Feb 24 02:10:14.368000 master-0 kubenswrapper[7864]: I0224 02:10:14.367964 7864 generic.go:334] "Generic (PLEG): container finished" podID="a4267e3a-aaaf-4b2f-a37c-0f097a35783f" containerID="17cb16eb2bff3a3eb8a7a600aa48da74788c2b2cfe679ee292ca901ee7cdc53d" exitCode=0
Feb 24 02:10:14.368116 master-0 kubenswrapper[7864]: I0224 02:10:14.368056 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkwwl" event={"ID":"a4267e3a-aaaf-4b2f-a37c-0f097a35783f","Type":"ContainerDied","Data":"17cb16eb2bff3a3eb8a7a600aa48da74788c2b2cfe679ee292ca901ee7cdc53d"}
Feb 24 02:10:14.368162 master-0 kubenswrapper[7864]: I0224 02:10:14.368139 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkwwl" event={"ID":"a4267e3a-aaaf-4b2f-a37c-0f097a35783f","Type":"ContainerStarted","Data":"200fe8533b917a93531075b1165c1c0e535a0cad7b646b4108d64e256389832b"}
Feb 24 02:10:14.403218 master-0 kubenswrapper[7864]: I0224 02:10:14.403045 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"
Feb 24 02:10:16.490397 master-0 kubenswrapper[7864]: W0224 02:10:16.490288 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1fb12ad_9035_4039_9be6_6f8a84b93063.slice/crio-1d7ad5dce5cca89cf38ab4bf4d6df280fd4b670e4a45f436939e2c9cc9abc228 WatchSource:0}: Error finding container 1d7ad5dce5cca89cf38ab4bf4d6df280fd4b670e4a45f436939e2c9cc9abc228: Status 404 returned error can't find the container with id 1d7ad5dce5cca89cf38ab4bf4d6df280fd4b670e4a45f436939e2c9cc9abc228
Feb 24 02:10:16.935037 master-0 kubenswrapper[7864]: I0224 02:10:16.934980 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"]
Feb 24 02:10:16.936736 master-0 kubenswrapper[7864]: I0224 02:10:16.936703 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:10:16.941608 master-0 kubenswrapper[7864]: I0224 02:10:16.939350 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-ncv6j"
Feb 24 02:10:16.941608 master-0 kubenswrapper[7864]: I0224 02:10:16.940025 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 02:10:16.941608 master-0 kubenswrapper[7864]: I0224 02:10:16.940240 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 24 02:10:16.941608 master-0 kubenswrapper[7864]: I0224 02:10:16.940482 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 24 02:10:16.954804 master-0 kubenswrapper[7864]: I0224 02:10:16.954751 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"]
Feb 24 02:10:16.963058 master-0 kubenswrapper[7864]: I0224 02:10:16.963012 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0127e0d5-9961-4ff6-851d-884e71e1dcf2-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:10:16.963151 master-0 kubenswrapper[7864]: I0224 02:10:16.963091 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbc5w\" (UniqueName: \"kubernetes.io/projected/0127e0d5-9961-4ff6-851d-884e71e1dcf2-kube-api-access-nbc5w\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:10:17.064177 master-0 kubenswrapper[7864]: I0224 02:10:17.064132 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0127e0d5-9961-4ff6-851d-884e71e1dcf2-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:10:17.064486 master-0 kubenswrapper[7864]: I0224 02:10:17.064462 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbc5w\" (UniqueName: \"kubernetes.io/projected/0127e0d5-9961-4ff6-851d-884e71e1dcf2-kube-api-access-nbc5w\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:10:17.071588 master-0 kubenswrapper[7864]: I0224 02:10:17.069029 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0127e0d5-9961-4ff6-851d-884e71e1dcf2-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:10:17.091099 master-0 kubenswrapper[7864]: I0224 02:10:17.090976 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbc5w\" (UniqueName: \"kubernetes.io/projected/0127e0d5-9961-4ff6-851d-884e71e1dcf2-kube-api-access-nbc5w\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:10:17.263953 master-0 kubenswrapper[7864]: I0224 02:10:17.263892 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:10:17.434426 master-0 kubenswrapper[7864]: I0224 02:10:17.432807 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqt7p" event={"ID":"b085f760-0e24-41a8-af09-538396aad935","Type":"ContainerStarted","Data":"f5b9142d347cbc969aebf3a8b0f790729154ed617740186897333ed97fd30b72"}
Feb 24 02:10:17.434426 master-0 kubenswrapper[7864]: I0224 02:10:17.434257 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4znnj" event={"ID":"e56a17d6-d740-4349-833e-b5279f7db2d4","Type":"ContainerStarted","Data":"15aeb6521953635bf29090f8919f8b80363ab8e1ccf1d84c06bc5a39df964852"}
Feb 24 02:10:17.436000 master-0 kubenswrapper[7864]: I0224 02:10:17.435653 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkwwl" event={"ID":"a4267e3a-aaaf-4b2f-a37c-0f097a35783f","Type":"ContainerStarted","Data":"1c5e5c71f0526c11a274e2510dcba4e8e573cc7f5dabb80f3b317e540bfa20fd"}
Feb 24 02:10:17.437520 master-0 kubenswrapper[7864]: I0224 02:10:17.437486 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" event={"ID":"b1fb12ad-9035-4039-9be6-6f8a84b93063","Type":"ContainerStarted","Data":"a0afb79e704d94990f37c7377f85e018c90abaa7d13c2a20dd0aac3f294b7e15"}
Feb 24 02:10:17.437608 master-0 kubenswrapper[7864]: I0224 02:10:17.437528 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" event={"ID":"b1fb12ad-9035-4039-9be6-6f8a84b93063","Type":"ContainerStarted","Data":"1d7ad5dce5cca89cf38ab4bf4d6df280fd4b670e4a45f436939e2c9cc9abc228"}
Feb 24 02:10:17.440393 master-0 kubenswrapper[7864]: I0224 02:10:17.439834 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" event={"ID":"74a7801b-b7a4-4292-91b3-6285c239aeb7","Type":"ContainerStarted","Data":"ac010ae93fa7a3f9d57b4980dd10c5273055c70374932bda4fb37b79384ffe47"}
Feb 24 02:10:17.452310 master-0 kubenswrapper[7864]: I0224 02:10:17.452256 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brpmb" event={"ID":"0cb042de-c873-408c-a4c4-ef9f7e546a08","Type":"ContainerStarted","Data":"d8bc7a70a71673332c516e84a549e1618f6a4d5aacc90397bac38d952ac62d70"}
Feb 24 02:10:17.479034 master-0 kubenswrapper[7864]: I0224 02:10:17.478888 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qqt7p" podStartSLOduration=3.238752091 podStartE2EDuration="8.478867649s" podCreationTimestamp="2026-02-24 02:10:09 +0000 UTC" firstStartedPulling="2026-02-24 02:10:11.242278513 +0000 UTC m=+375.569932175" lastFinishedPulling="2026-02-24 02:10:16.482394111 +0000 UTC m=+380.810047733" observedRunningTime="2026-02-24 02:10:17.45620405 +0000 UTC m=+381.783857682" watchObservedRunningTime="2026-02-24 02:10:17.478867649 +0000 UTC m=+381.806521281"
Feb 24 02:10:17.523059 master-0 kubenswrapper[7864]: I0224 02:10:17.522964 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4znnj" podStartSLOduration=3.223989052 podStartE2EDuration="8.522911984s" podCreationTimestamp="2026-02-24 02:10:09 +0000 UTC" firstStartedPulling="2026-02-24 02:10:11.257894886 +0000 UTC m=+375.585548538" lastFinishedPulling="2026-02-24 02:10:16.556817848 +0000 UTC m=+380.884471470" observedRunningTime="2026-02-24 02:10:17.520627773 +0000 UTC m=+381.848281405" watchObservedRunningTime="2026-02-24 02:10:17.522911984 +0000 UTC m=+381.850565616"
Feb 24 02:10:17.866336 master-0 kubenswrapper[7864]: I0224 02:10:17.862163 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" podStartSLOduration=6.207615785 podStartE2EDuration="13.8621417s" podCreationTimestamp="2026-02-24 02:10:04 +0000 UTC" firstStartedPulling="2026-02-24 02:10:08.951263818 +0000 UTC m=+373.278917480" lastFinishedPulling="2026-02-24 02:10:16.605789743 +0000 UTC m=+380.933443395" observedRunningTime="2026-02-24 02:10:17.548096499 +0000 UTC m=+381.875750131" watchObservedRunningTime="2026-02-24 02:10:17.8621417 +0000 UTC m=+382.189795332"
Feb 24 02:10:17.866336 master-0 kubenswrapper[7864]: I0224 02:10:17.865266 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"]
Feb 24 02:10:17.893520 master-0 kubenswrapper[7864]: I0224 02:10:17.892538 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"]
Feb 24 02:10:17.893520 master-0 kubenswrapper[7864]: I0224 02:10:17.893400 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:17.897499 master-0 kubenswrapper[7864]: I0224 02:10:17.896789 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-ndtpv"
Feb 24 02:10:17.897499 master-0 kubenswrapper[7864]: I0224 02:10:17.897365 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 24 02:10:17.897955 master-0 kubenswrapper[7864]: I0224 02:10:17.897730 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 24 02:10:17.907973 master-0 kubenswrapper[7864]: I0224 02:10:17.907897 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"]
Feb 24 02:10:18.085087 master-0 kubenswrapper[7864]: I0224 02:10:18.085012 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt2q4\" (UniqueName: \"kubernetes.io/projected/91168f3d-70eb-4351-bb83-5411a96ad29d-kube-api-access-rt2q4\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.085324 master-0 kubenswrapper[7864]: I0224 02:10:18.085159 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91168f3d-70eb-4351-bb83-5411a96ad29d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.085324 master-0 kubenswrapper[7864]: I0224 02:10:18.085199 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91168f3d-70eb-4351-bb83-5411a96ad29d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.186464 master-0 kubenswrapper[7864]: I0224 02:10:18.186301 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91168f3d-70eb-4351-bb83-5411a96ad29d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.186464 master-0 kubenswrapper[7864]: I0224 02:10:18.186370 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91168f3d-70eb-4351-bb83-5411a96ad29d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.186809 master-0 kubenswrapper[7864]: I0224 02:10:18.186624 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt2q4\" (UniqueName: \"kubernetes.io/projected/91168f3d-70eb-4351-bb83-5411a96ad29d-kube-api-access-rt2q4\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.187767 master-0 kubenswrapper[7864]: I0224 02:10:18.187734 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91168f3d-70eb-4351-bb83-5411a96ad29d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.193305 master-0 kubenswrapper[7864]: I0224 02:10:18.193247 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91168f3d-70eb-4351-bb83-5411a96ad29d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.211310 master-0 kubenswrapper[7864]: I0224 02:10:18.211249 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt2q4\" (UniqueName: \"kubernetes.io/projected/91168f3d-70eb-4351-bb83-5411a96ad29d-kube-api-access-rt2q4\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.261402 master-0 kubenswrapper[7864]: I0224 02:10:18.261333 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:10:18.328256 master-0 kubenswrapper[7864]: I0224 02:10:18.328192 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"]
Feb 24 02:10:18.329219 master-0 kubenswrapper[7864]: I0224 02:10:18.329183 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"
Feb 24 02:10:18.334889 master-0 kubenswrapper[7864]: I0224 02:10:18.334433 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-6fp4p"
Feb 24 02:10:18.335014 master-0 kubenswrapper[7864]: I0224 02:10:18.334907 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 24 02:10:18.356851 master-0 kubenswrapper[7864]: I0224 02:10:18.356789 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"]
Feb 24 02:10:18.391476 master-0 kubenswrapper[7864]: I0224 02:10:18.391390 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/011c6603-d533-4449-b409-f6f698a3bd50-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"
Feb 24 02:10:18.391723 master-0 kubenswrapper[7864]: I0224 02:10:18.391515 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh4wr\" (UniqueName: \"kubernetes.io/projected/011c6603-d533-4449-b409-f6f698a3bd50-kube-api-access-xh4wr\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"
Feb 24 02:10:18.468511 master-0 kubenswrapper[7864]: I0224 02:10:18.468471 7864 generic.go:334] "Generic (PLEG): container finished" podID="a4267e3a-aaaf-4b2f-a37c-0f097a35783f" containerID="1c5e5c71f0526c11a274e2510dcba4e8e573cc7f5dabb80f3b317e540bfa20fd" exitCode=0
Feb 24 02:10:18.468900 master-0 kubenswrapper[7864]: I0224 02:10:18.468559 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkwwl" event={"ID":"a4267e3a-aaaf-4b2f-a37c-0f097a35783f","Type":"ContainerDied","Data":"1c5e5c71f0526c11a274e2510dcba4e8e573cc7f5dabb80f3b317e540bfa20fd"}
Feb 24 02:10:18.472223 master-0 kubenswrapper[7864]: I0224 02:10:18.470635 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s" event={"ID":"0127e0d5-9961-4ff6-851d-884e71e1dcf2","Type":"ContainerStarted","Data":"c8a44d641739b0edde589e3cc2ab82e120d1f854cda8b41d7ab46952d705c4b9"}
Feb 24 02:10:18.477970 master-0 kubenswrapper[7864]: I0224 02:10:18.477883 7864 generic.go:334] "Generic (PLEG): container finished" podID="0cb042de-c873-408c-a4c4-ef9f7e546a08" containerID="d8bc7a70a71673332c516e84a549e1618f6a4d5aacc90397bac38d952ac62d70" exitCode=0
Feb 24 02:10:18.478792 master-0 kubenswrapper[7864]: I0224 02:10:18.478738 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brpmb" event={"ID":"0cb042de-c873-408c-a4c4-ef9f7e546a08","Type":"ContainerDied","Data":"d8bc7a70a71673332c516e84a549e1618f6a4d5aacc90397bac38d952ac62d70"}
Feb 24 02:10:18.494815 master-0 kubenswrapper[7864]: I0224 02:10:18.494760 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/011c6603-d533-4449-b409-f6f698a3bd50-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"
Feb 24 02:10:18.495392 master-0 kubenswrapper[7864]: I0224 02:10:18.495340 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh4wr\" (UniqueName: \"kubernetes.io/projected/011c6603-d533-4449-b409-f6f698a3bd50-kube-api-access-xh4wr\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"
Feb 24 02:10:18.500883 master-0 kubenswrapper[7864]: I0224 02:10:18.500836 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/011c6603-d533-4449-b409-f6f698a3bd50-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"
Feb 24 02:10:18.520036 master-0 kubenswrapper[7864]: I0224 02:10:18.519971 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh4wr\" (UniqueName: \"kubernetes.io/projected/011c6603-d533-4449-b409-f6f698a3bd50-kube-api-access-xh4wr\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"
Feb 24 02:10:18.655014 master-0 kubenswrapper[7864]: I0224 02:10:18.654957 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"
Feb 24 02:10:18.918601 master-0 kubenswrapper[7864]: I0224 02:10:18.918492 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-dbkwd"]
Feb 24 02:10:18.922019 master-0 kubenswrapper[7864]: I0224 02:10:18.920561 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:18.927602 master-0 kubenswrapper[7864]: I0224 02:10:18.927192 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 24 02:10:18.928073 master-0 kubenswrapper[7864]: I0224 02:10:18.928054 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 24 02:10:18.928406 master-0 kubenswrapper[7864]: I0224 02:10:18.928057 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 24 02:10:18.928406 master-0 kubenswrapper[7864]: I0224 02:10:18.928059 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 24 02:10:18.928406 master-0 kubenswrapper[7864]: I0224 02:10:18.928118 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rrfph"
Feb 24 02:10:18.928510 master-0 kubenswrapper[7864]: I0224 02:10:18.928125 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 24 02:10:18.986386 master-0 kubenswrapper[7864]: I0224 02:10:18.985563 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-dbkwd"]
Feb 24 02:10:19.004628 master-0 kubenswrapper[7864]: I0224 02:10:19.002627 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-service-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.004628 master-0 kubenswrapper[7864]: I0224 02:10:19.002687 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.004628 master-0 kubenswrapper[7864]: I0224 02:10:19.002727 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e0c87ae-6387-4c00-b03d-582566907fb6-serving-cert\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.004628 master-0 kubenswrapper[7864]: I0224 02:10:19.002749 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dcvb\" (UniqueName: \"kubernetes.io/projected/8e0c87ae-6387-4c00-b03d-582566907fb6-kube-api-access-5dcvb\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.004628 master-0 kubenswrapper[7864]: I0224 02:10:19.002841 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8e0c87ae-6387-4c00-b03d-582566907fb6-snapshots\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.103862 master-0 kubenswrapper[7864]: I0224 02:10:19.103810 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-service-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.104052 master-0 kubenswrapper[7864]: I0224 02:10:19.103886 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.104052 master-0 kubenswrapper[7864]: I0224 02:10:19.103952 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e0c87ae-6387-4c00-b03d-582566907fb6-serving-cert\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.104052 master-0 kubenswrapper[7864]: I0224 02:10:19.103987 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dcvb\" (UniqueName: \"kubernetes.io/projected/8e0c87ae-6387-4c00-b03d-582566907fb6-kube-api-access-5dcvb\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.104052 master-0 kubenswrapper[7864]: I0224 02:10:19.104029 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8e0c87ae-6387-4c00-b03d-582566907fb6-snapshots\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.106554 master-0 kubenswrapper[7864]: I0224 02:10:19.105536 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8e0c87ae-6387-4c00-b03d-582566907fb6-snapshots\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.106554 master-0 kubenswrapper[7864]: I0224 02:10:19.105661 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.106554 master-0 kubenswrapper[7864]: I0224 02:10:19.106512 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-service-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.112769 master-0 kubenswrapper[7864]: I0224 02:10:19.111833 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e0c87ae-6387-4c00-b03d-582566907fb6-serving-cert\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.127684 master-0 kubenswrapper[7864]: I0224 02:10:19.127635 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dcvb\" (UniqueName: \"kubernetes.io/projected/8e0c87ae-6387-4c00-b03d-582566907fb6-kube-api-access-5dcvb\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.257238 master-0 kubenswrapper[7864]: I0224 02:10:19.257193 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:10:19.317070 master-0 kubenswrapper[7864]: I0224 02:10:19.317001 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"]
Feb 24 02:10:19.369593 master-0 kubenswrapper[7864]: I0224 02:10:19.367002 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"]
Feb 24 02:10:19.381388 master-0 kubenswrapper[7864]: W0224 02:10:19.381333 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91168f3d_70eb_4351_bb83_5411a96ad29d.slice/crio-2335a36c0dbf2382a55062b41bd9de9b70220499140428315aa61fdfd6dde11f WatchSource:0}: Error finding container 2335a36c0dbf2382a55062b41bd9de9b70220499140428315aa61fdfd6dde11f: Status 404 returned error can't find the container with id 2335a36c0dbf2382a55062b41bd9de9b70220499140428315aa61fdfd6dde11f
Feb 24 02:10:19.492153 master-0 kubenswrapper[7864]: I0224 02:10:19.491894 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" event={"ID":"b1fb12ad-9035-4039-9be6-6f8a84b93063","Type":"ContainerStarted","Data":"704cc93c8cee0a4135d186aa5dbd6348038b2819c9f9b5e8637ce4b259a4b3a4"}
Feb 24 02:10:19.493320 master-0 kubenswrapper[7864]: I0224 02:10:19.493274 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk" event={"ID":"91168f3d-70eb-4351-bb83-5411a96ad29d","Type":"ContainerStarted","Data":"2335a36c0dbf2382a55062b41bd9de9b70220499140428315aa61fdfd6dde11f"}
Feb 24 02:10:19.497187 master-0 kubenswrapper[7864]: I0224 02:10:19.497148 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brpmb" event={"ID":"0cb042de-c873-408c-a4c4-ef9f7e546a08","Type":"ContainerStarted","Data":"810cd243cc24c7b21ab849373cf57f7831cbca0a7bcf82441855e145620041e9"}
Feb 24 02:10:19.505632 master-0 kubenswrapper[7864]: I0224 02:10:19.499387 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk" event={"ID":"011c6603-d533-4449-b409-f6f698a3bd50","Type":"ContainerStarted","Data":"da0959fc5c7a27175270ce726463fd3e9e8da5aff2a8a6bf45a477613fc17349"}
Feb 24 02:10:19.505632 master-0 kubenswrapper[7864]: I0224 02:10:19.501440 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkwwl" event={"ID":"a4267e3a-aaaf-4b2f-a37c-0f097a35783f","Type":"ContainerStarted","Data":"e749b70347c5e0663b75c02245f5717b4cc199456a6d1ef4ebc3ed6a3962364b"}
Feb 24 02:10:19.510031 master-0 kubenswrapper[7864]: I0224 02:10:19.509983 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" podStartSLOduration=3.472102422 podStartE2EDuration="5.509952236s" podCreationTimestamp="2026-02-24 02:10:14 +0000 UTC" firstStartedPulling="2026-02-24 02:10:16.804739522 +0000 UTC m=+381.132393154" lastFinishedPulling="2026-02-24 02:10:18.842589346 +0000 UTC m=+383.170242968" observedRunningTime="2026-02-24 02:10:19.509777091 +0000 UTC m=+383.837430713" watchObservedRunningTime="2026-02-24 02:10:19.509952236 +0000 UTC m=+383.837605858"
Feb 24 02:10:19.534111 master-0 kubenswrapper[7864]: I0224 02:10:19.534055 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kkwwl" podStartSLOduration=4.823851907 podStartE2EDuration="7.534041673s" podCreationTimestamp="2026-02-24 02:10:12 +0000 UTC" firstStartedPulling="2026-02-24 02:10:16.441012587 +0000 UTC m=+380.768666239" lastFinishedPulling="2026-02-24 02:10:19.151202383 +0000 UTC m=+383.478856005"
observedRunningTime="2026-02-24 02:10:19.53317552 +0000 UTC m=+383.860829162" watchObservedRunningTime="2026-02-24 02:10:19.534041673 +0000 UTC m=+383.861695295" Feb 24 02:10:19.726265 master-0 kubenswrapper[7864]: I0224 02:10:19.726159 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-brpmb" podStartSLOduration=5.032969564 podStartE2EDuration="7.72613836s" podCreationTimestamp="2026-02-24 02:10:12 +0000 UTC" firstStartedPulling="2026-02-24 02:10:16.442537288 +0000 UTC m=+380.770190910" lastFinishedPulling="2026-02-24 02:10:19.135706094 +0000 UTC m=+383.463359706" observedRunningTime="2026-02-24 02:10:19.55095693 +0000 UTC m=+383.878610542" watchObservedRunningTime="2026-02-24 02:10:19.72613836 +0000 UTC m=+384.053791992" Feb 24 02:10:19.727228 master-0 kubenswrapper[7864]: I0224 02:10:19.727204 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-dbkwd"] Feb 24 02:10:19.740487 master-0 kubenswrapper[7864]: W0224 02:10:19.740443 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e0c87ae_6387_4c00_b03d_582566907fb6.slice/crio-627eb8f17d5fd787312a09b22ed574b8b738c499ce476e336267fe5d3546a7b9 WatchSource:0}: Error finding container 627eb8f17d5fd787312a09b22ed574b8b738c499ce476e336267fe5d3546a7b9: Status 404 returned error can't find the container with id 627eb8f17d5fd787312a09b22ed574b8b738c499ce476e336267fe5d3546a7b9 Feb 24 02:10:19.875661 master-0 kubenswrapper[7864]: I0224 02:10:19.875140 7864 scope.go:117] "RemoveContainer" containerID="4291e0907ef19a32f51af939efa4c04ce2db9b3453891c7e12142da00a40e520" Feb 24 02:10:19.875661 master-0 kubenswrapper[7864]: E0224 02:10:19.875340 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" Feb 24 02:10:20.124687 master-0 kubenswrapper[7864]: I0224 02:10:20.120620 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:10:20.124687 master-0 kubenswrapper[7864]: I0224 02:10:20.120752 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:10:20.188848 master-0 kubenswrapper[7864]: I0224 02:10:20.188734 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:10:20.188848 master-0 kubenswrapper[7864]: I0224 02:10:20.188793 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:10:20.242661 master-0 kubenswrapper[7864]: I0224 02:10:20.242296 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:10:20.512644 master-0 kubenswrapper[7864]: I0224 02:10:20.510629 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw"] Feb 24 02:10:20.512644 master-0 kubenswrapper[7864]: I0224 02:10:20.511560 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.514833 master-0 kubenswrapper[7864]: I0224 02:10:20.514770 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk" event={"ID":"91168f3d-70eb-4351-bb83-5411a96ad29d","Type":"ContainerStarted","Data":"67bcd090168c3067b9df003f44922fafc0a07cb705086156a46960cbedf5866d"} Feb 24 02:10:20.515992 master-0 kubenswrapper[7864]: I0224 02:10:20.515941 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" event={"ID":"8e0c87ae-6387-4c00-b03d-582566907fb6","Type":"ContainerStarted","Data":"627eb8f17d5fd787312a09b22ed574b8b738c499ce476e336267fe5d3546a7b9"} Feb 24 02:10:20.520093 master-0 kubenswrapper[7864]: I0224 02:10:20.518261 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 24 02:10:20.520093 master-0 kubenswrapper[7864]: I0224 02:10:20.518524 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 24 02:10:20.520093 master-0 kubenswrapper[7864]: I0224 02:10:20.518730 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 24 02:10:20.520093 master-0 kubenswrapper[7864]: I0224 02:10:20.519087 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lprdj" Feb 24 02:10:20.520093 master-0 kubenswrapper[7864]: I0224 02:10:20.519424 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 02:10:20.520275 master-0 kubenswrapper[7864]: I0224 
02:10:20.520171 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 24 02:10:20.539069 master-0 kubenswrapper[7864]: I0224 02:10:20.534352 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d706583-f8dc-4a3c-832b-7e8249d0c662-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.539069 master-0 kubenswrapper[7864]: I0224 02:10:20.534422 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpg5p\" (UniqueName: \"kubernetes.io/projected/9d706583-f8dc-4a3c-832b-7e8249d0c662-kube-api-access-bpg5p\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.539069 master-0 kubenswrapper[7864]: I0224 02:10:20.534512 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.539069 master-0 kubenswrapper[7864]: I0224 02:10:20.534541 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.539069 master-0 kubenswrapper[7864]: I0224 02:10:20.534570 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9d706583-f8dc-4a3c-832b-7e8249d0c662-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.643305 master-0 kubenswrapper[7864]: I0224 02:10:20.643263 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d706583-f8dc-4a3c-832b-7e8249d0c662-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.643802 master-0 kubenswrapper[7864]: I0224 02:10:20.643764 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpg5p\" (UniqueName: \"kubernetes.io/projected/9d706583-f8dc-4a3c-832b-7e8249d0c662-kube-api-access-bpg5p\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.643866 master-0 kubenswrapper[7864]: I0224 02:10:20.643824 7864 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.643866 master-0 kubenswrapper[7864]: I0224 02:10:20.643845 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.643866 master-0 kubenswrapper[7864]: I0224 02:10:20.643863 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9d706583-f8dc-4a3c-832b-7e8249d0c662-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.643993 master-0 kubenswrapper[7864]: I0224 02:10:20.643973 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9d706583-f8dc-4a3c-832b-7e8249d0c662-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.648630 master-0 kubenswrapper[7864]: I0224 02:10:20.644830 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.648630 master-0 kubenswrapper[7864]: I0224 02:10:20.645295 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.653300 master-0 kubenswrapper[7864]: I0224 02:10:20.653278 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d706583-f8dc-4a3c-832b-7e8249d0c662-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.703165 master-0 kubenswrapper[7864]: I0224 02:10:20.703126 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpg5p\" (UniqueName: \"kubernetes.io/projected/9d706583-f8dc-4a3c-832b-7e8249d0c662-kube-api-access-bpg5p\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.704732 master-0 kubenswrapper[7864]: I0224 02:10:20.704693 7864 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm"] Feb 24 02:10:20.705563 master-0 kubenswrapper[7864]: I0224 02:10:20.705547 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.728650 master-0 kubenswrapper[7864]: I0224 02:10:20.723126 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 02:10:20.728650 master-0 kubenswrapper[7864]: I0224 02:10:20.724609 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-brkg4" Feb 24 02:10:20.728650 master-0 kubenswrapper[7864]: I0224 02:10:20.724805 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 02:10:20.728650 master-0 kubenswrapper[7864]: I0224 02:10:20.728270 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 02:10:20.748955 master-0 kubenswrapper[7864]: I0224 02:10:20.748521 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm"] Feb 24 02:10:20.840137 master-0 kubenswrapper[7864]: I0224 02:10:20.839242 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" Feb 24 02:10:20.847154 master-0 kubenswrapper[7864]: I0224 02:10:20.846528 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.847154 master-0 kubenswrapper[7864]: I0224 02:10:20.846579 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-images\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.847154 master-0 kubenswrapper[7864]: I0224 02:10:20.846616 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8msx\" (UniqueName: \"kubernetes.io/projected/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-kube-api-access-q8msx\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.847154 master-0 kubenswrapper[7864]: I0224 02:10:20.846635 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-config\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.950439 master-0 kubenswrapper[7864]: I0224 
02:10:20.950381 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.950760 master-0 kubenswrapper[7864]: I0224 02:10:20.950734 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-images\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.950803 master-0 kubenswrapper[7864]: I0224 02:10:20.950782 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8msx\" (UniqueName: \"kubernetes.io/projected/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-kube-api-access-q8msx\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.950832 master-0 kubenswrapper[7864]: I0224 02:10:20.950811 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-config\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.954468 master-0 kubenswrapper[7864]: I0224 02:10:20.954374 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: 
\"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.954928 master-0 kubenswrapper[7864]: I0224 02:10:20.954803 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-images\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.956344 master-0 kubenswrapper[7864]: I0224 02:10:20.956307 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-config\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:20.974437 master-0 kubenswrapper[7864]: I0224 02:10:20.974050 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8msx\" (UniqueName: \"kubernetes.io/projected/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-kube-api-access-q8msx\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:21.039090 master-0 kubenswrapper[7864]: I0224 02:10:21.039027 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:10:21.178740 master-0 kubenswrapper[7864]: I0224 02:10:21.173433 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4znnj" podUID="e56a17d6-d740-4349-833e-b5279f7db2d4" containerName="registry-server" probeResult="failure" output=< Feb 24 02:10:21.178740 master-0 kubenswrapper[7864]: timeout: failed to connect service ":50051" within 1s Feb 24 02:10:21.178740 master-0 kubenswrapper[7864]: > Feb 24 02:10:22.102353 master-0 kubenswrapper[7864]: I0224 02:10:22.102302 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hfpql"] Feb 24 02:10:22.103301 master-0 kubenswrapper[7864]: I0224 02:10:22.103269 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.107039 master-0 kubenswrapper[7864]: I0224 02:10:22.107007 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 02:10:22.107116 master-0 kubenswrapper[7864]: I0224 02:10:22.107023 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-pbr2s" Feb 24 02:10:22.283389 master-0 kubenswrapper[7864]: I0224 02:10:22.283322 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4q7n\" (UniqueName: \"kubernetes.io/projected/df42c69b-1a0e-41f5-9006-17540369b9ad-kube-api-access-f4q7n\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.283714 master-0 kubenswrapper[7864]: I0224 02:10:22.283692 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rootfs\" (UniqueName: \"kubernetes.io/host-path/df42c69b-1a0e-41f5-9006-17540369b9ad-rootfs\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.283826 master-0 kubenswrapper[7864]: I0224 02:10:22.283811 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df42c69b-1a0e-41f5-9006-17540369b9ad-proxy-tls\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.283983 master-0 kubenswrapper[7864]: I0224 02:10:22.283968 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df42c69b-1a0e-41f5-9006-17540369b9ad-mcd-auth-proxy-config\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.385439 master-0 kubenswrapper[7864]: I0224 02:10:22.385311 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df42c69b-1a0e-41f5-9006-17540369b9ad-mcd-auth-proxy-config\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.385439 master-0 kubenswrapper[7864]: I0224 02:10:22.385397 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4q7n\" (UniqueName: \"kubernetes.io/projected/df42c69b-1a0e-41f5-9006-17540369b9ad-kube-api-access-f4q7n\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " 
pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.385439 master-0 kubenswrapper[7864]: I0224 02:10:22.385433 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/df42c69b-1a0e-41f5-9006-17540369b9ad-rootfs\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.386756 master-0 kubenswrapper[7864]: I0224 02:10:22.386701 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df42c69b-1a0e-41f5-9006-17540369b9ad-mcd-auth-proxy-config\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.386844 master-0 kubenswrapper[7864]: I0224 02:10:22.386814 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df42c69b-1a0e-41f5-9006-17540369b9ad-proxy-tls\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.387858 master-0 kubenswrapper[7864]: I0224 02:10:22.387793 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/df42c69b-1a0e-41f5-9006-17540369b9ad-rootfs\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.390277 master-0 kubenswrapper[7864]: I0224 02:10:22.390232 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df42c69b-1a0e-41f5-9006-17540369b9ad-proxy-tls\") pod \"machine-config-daemon-hfpql\" 
(UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.406152 master-0 kubenswrapper[7864]: I0224 02:10:22.406093 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4q7n\" (UniqueName: \"kubernetes.io/projected/df42c69b-1a0e-41f5-9006-17540369b9ad-kube-api-access-f4q7n\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.429884 master-0 kubenswrapper[7864]: I0224 02:10:22.429485 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:10:22.875096 master-0 kubenswrapper[7864]: I0224 02:10:22.875011 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:22.875096 master-0 kubenswrapper[7864]: I0224 02:10:22.875091 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:22.931419 master-0 kubenswrapper[7864]: I0224 02:10:22.931351 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:22.931730 master-0 kubenswrapper[7864]: I0224 02:10:22.931661 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:22.932845 master-0 kubenswrapper[7864]: I0224 02:10:22.932785 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:22.935524 master-0 kubenswrapper[7864]: W0224 02:10:22.935470 7864 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d706583_f8dc_4a3c_832b_7e8249d0c662.slice/crio-acbd6c24283dc82192e4eb0fdc26b53de15fac7e1a937183378a366b3365f0ee WatchSource:0}: Error finding container acbd6c24283dc82192e4eb0fdc26b53de15fac7e1a937183378a366b3365f0ee: Status 404 returned error can't find the container with id acbd6c24283dc82192e4eb0fdc26b53de15fac7e1a937183378a366b3365f0ee Feb 24 02:10:22.995249 master-0 kubenswrapper[7864]: I0224 02:10:22.995190 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:23.540503 master-0 kubenswrapper[7864]: I0224 02:10:23.540411 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" event={"ID":"9d706583-f8dc-4a3c-832b-7e8249d0c662","Type":"ContainerStarted","Data":"acbd6c24283dc82192e4eb0fdc26b53de15fac7e1a937183378a366b3365f0ee"} Feb 24 02:10:23.632997 master-0 kubenswrapper[7864]: I0224 02:10:23.632938 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm"] Feb 24 02:10:23.961819 master-0 kubenswrapper[7864]: W0224 02:10:23.961750 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ce6dd93_084c_4e15_8b7c_e0829a6df14e.slice/crio-7a26b2c8abf7070791144c3b808d314860b4cb305adbb9095d34928ddbac7f4a WatchSource:0}: Error finding container 7a26b2c8abf7070791144c3b808d314860b4cb305adbb9095d34928ddbac7f4a: Status 404 returned error can't find the container with id 7a26b2c8abf7070791144c3b808d314860b4cb305adbb9095d34928ddbac7f4a Feb 24 02:10:23.967420 master-0 kubenswrapper[7864]: W0224 02:10:23.967295 7864 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf42c69b_1a0e_41f5_9006_17540369b9ad.slice/crio-8f97b854deca962131113e1e495c21e96d538f802eb3f5de41bedce3ba1452e3 WatchSource:0}: Error finding container 8f97b854deca962131113e1e495c21e96d538f802eb3f5de41bedce3ba1452e3: Status 404 returned error can't find the container with id 8f97b854deca962131113e1e495c21e96d538f802eb3f5de41bedce3ba1452e3 Feb 24 02:10:24.554975 master-0 kubenswrapper[7864]: I0224 02:10:24.554908 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s" event={"ID":"0127e0d5-9961-4ff6-851d-884e71e1dcf2","Type":"ContainerStarted","Data":"73a9bec7b7f8ef1aa34fa536a5b424811fa7916bba904a12f88c7fc64ea1d064"} Feb 24 02:10:24.560025 master-0 kubenswrapper[7864]: I0224 02:10:24.558843 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk" event={"ID":"91168f3d-70eb-4351-bb83-5411a96ad29d","Type":"ContainerStarted","Data":"48e2670880e2a3683b1d35f70f451bd1df96189544e8c098ed8ce993e49c290a"} Feb 24 02:10:24.564661 master-0 kubenswrapper[7864]: I0224 02:10:24.563182 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" event={"ID":"df42c69b-1a0e-41f5-9006-17540369b9ad","Type":"ContainerStarted","Data":"7f363b229955dc8837df5e264f53a130100fee7d47644a7ba2897d8fb2e9598c"} Feb 24 02:10:24.564661 master-0 kubenswrapper[7864]: I0224 02:10:24.563219 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" event={"ID":"df42c69b-1a0e-41f5-9006-17540369b9ad","Type":"ContainerStarted","Data":"8f97b854deca962131113e1e495c21e96d538f802eb3f5de41bedce3ba1452e3"} Feb 24 02:10:24.569095 master-0 kubenswrapper[7864]: I0224 02:10:24.569029 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk" event={"ID":"011c6603-d533-4449-b409-f6f698a3bd50","Type":"ContainerStarted","Data":"188661fad9f6ca0ff77605f5232fde5303986f995916e8cce064c3bbfe8c7e01"} Feb 24 02:10:24.576820 master-0 kubenswrapper[7864]: I0224 02:10:24.576790 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" event={"ID":"0ce6dd93-084c-4e15-8b7c-e0829a6df14e","Type":"ContainerStarted","Data":"da1d53ab7e4f85fb1b75944d0a63b108d12150f96511744937d45deda5882b0f"} Feb 24 02:10:24.576947 master-0 kubenswrapper[7864]: I0224 02:10:24.576826 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" event={"ID":"0ce6dd93-084c-4e15-8b7c-e0829a6df14e","Type":"ContainerStarted","Data":"7a26b2c8abf7070791144c3b808d314860b4cb305adbb9095d34928ddbac7f4a"} Feb 24 02:10:24.587210 master-0 kubenswrapper[7864]: I0224 02:10:24.587101 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk" podStartSLOduration=4.190648349 podStartE2EDuration="7.587071044s" podCreationTimestamp="2026-02-24 02:10:17 +0000 UTC" firstStartedPulling="2026-02-24 02:10:19.545364252 +0000 UTC m=+383.873017874" lastFinishedPulling="2026-02-24 02:10:22.941786957 +0000 UTC m=+387.269440569" observedRunningTime="2026-02-24 02:10:24.586737046 +0000 UTC m=+388.914390708" watchObservedRunningTime="2026-02-24 02:10:24.587071044 +0000 UTC m=+388.914724716" Feb 24 02:10:24.622461 master-0 kubenswrapper[7864]: I0224 02:10:24.622244 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk" podStartSLOduration=2.002321519 podStartE2EDuration="6.622216023s" podCreationTimestamp="2026-02-24 02:10:18 +0000 UTC" firstStartedPulling="2026-02-24 02:10:19.347274406 +0000 
UTC m=+383.674928038" lastFinishedPulling="2026-02-24 02:10:23.96716889 +0000 UTC m=+388.294822542" observedRunningTime="2026-02-24 02:10:24.620117758 +0000 UTC m=+388.947771390" watchObservedRunningTime="2026-02-24 02:10:24.622216023 +0000 UTC m=+388.949869655" Feb 24 02:10:24.670394 master-0 kubenswrapper[7864]: I0224 02:10:24.670361 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:10:25.592178 master-0 kubenswrapper[7864]: I0224 02:10:25.592090 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" event={"ID":"df42c69b-1a0e-41f5-9006-17540369b9ad","Type":"ContainerStarted","Data":"67ca08a71ef0ee4b6264d0683316a5400dc6c91bab3f8b6764b01d1d93cebc51"} Feb 24 02:10:25.594776 master-0 kubenswrapper[7864]: I0224 02:10:25.594705 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" event={"ID":"8e0c87ae-6387-4c00-b03d-582566907fb6","Type":"ContainerStarted","Data":"2b6a32bb9499d220c3167aa1a2fb2a91c9d1624533d8361accf480baf5e26efd"} Feb 24 02:10:25.604676 master-0 kubenswrapper[7864]: I0224 02:10:25.604607 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s" event={"ID":"0127e0d5-9961-4ff6-851d-884e71e1dcf2","Type":"ContainerStarted","Data":"c48f95c28d464405a814f3600c47bc4d976bf90d59f1a44943118946c66b1b12"} Feb 24 02:10:25.627505 master-0 kubenswrapper[7864]: I0224 02:10:25.627375 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" podStartSLOduration=3.627267339 podStartE2EDuration="3.627267339s" podCreationTimestamp="2026-02-24 02:10:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
02:10:25.625246936 +0000 UTC m=+389.952900568" watchObservedRunningTime="2026-02-24 02:10:25.627267339 +0000 UTC m=+389.954920971" Feb 24 02:10:25.668626 master-0 kubenswrapper[7864]: I0224 02:10:25.665220 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s" podStartSLOduration=4.750257699 podStartE2EDuration="9.665189752s" podCreationTimestamp="2026-02-24 02:10:16 +0000 UTC" firstStartedPulling="2026-02-24 02:10:18.026556866 +0000 UTC m=+382.354210478" lastFinishedPulling="2026-02-24 02:10:22.941488869 +0000 UTC m=+387.269142531" observedRunningTime="2026-02-24 02:10:25.661475813 +0000 UTC m=+389.989129515" watchObservedRunningTime="2026-02-24 02:10:25.665189752 +0000 UTC m=+389.992843414" Feb 24 02:10:25.703245 master-0 kubenswrapper[7864]: I0224 02:10:25.702952 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" podStartSLOduration=2.966687561 podStartE2EDuration="7.702912179s" podCreationTimestamp="2026-02-24 02:10:18 +0000 UTC" firstStartedPulling="2026-02-24 02:10:19.752914998 +0000 UTC m=+384.080568620" lastFinishedPulling="2026-02-24 02:10:24.489139576 +0000 UTC m=+388.816793238" observedRunningTime="2026-02-24 02:10:25.695556114 +0000 UTC m=+390.023209766" watchObservedRunningTime="2026-02-24 02:10:25.702912179 +0000 UTC m=+390.030565801" Feb 24 02:10:29.895394 master-0 kubenswrapper[7864]: I0224 02:10:29.894112 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4"] Feb 24 02:10:29.901000 master-0 kubenswrapper[7864]: I0224 02:10:29.895920 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:29.901000 master-0 kubenswrapper[7864]: I0224 02:10:29.897877 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 02:10:29.901000 master-0 kubenswrapper[7864]: I0224 02:10:29.898833 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-rpcz4" Feb 24 02:10:29.910756 master-0 kubenswrapper[7864]: I0224 02:10:29.908044 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4"] Feb 24 02:10:30.064260 master-0 kubenswrapper[7864]: I0224 02:10:30.063171 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-proxy-tls\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.064260 master-0 kubenswrapper[7864]: I0224 02:10:30.063277 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.064260 master-0 kubenswrapper[7864]: I0224 02:10:30.063487 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wvl\" (UniqueName: \"kubernetes.io/projected/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-kube-api-access-w6wvl\") pod 
\"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.166885 master-0 kubenswrapper[7864]: I0224 02:10:30.164513 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wvl\" (UniqueName: \"kubernetes.io/projected/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-kube-api-access-w6wvl\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.166885 master-0 kubenswrapper[7864]: I0224 02:10:30.164625 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-proxy-tls\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.166885 master-0 kubenswrapper[7864]: I0224 02:10:30.164673 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.166885 master-0 kubenswrapper[7864]: I0224 02:10:30.165665 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " 
pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.172167 master-0 kubenswrapper[7864]: I0224 02:10:30.172136 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-proxy-tls\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.178916 master-0 kubenswrapper[7864]: I0224 02:10:30.178882 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:10:30.194807 master-0 kubenswrapper[7864]: I0224 02:10:30.194732 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wvl\" (UniqueName: \"kubernetes.io/projected/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-kube-api-access-w6wvl\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.231022 master-0 kubenswrapper[7864]: I0224 02:10:30.230991 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:10:30.238533 master-0 kubenswrapper[7864]: I0224 02:10:30.238499 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:10:30.257958 master-0 kubenswrapper[7864]: I0224 02:10:30.257907 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:10:30.654666 master-0 kubenswrapper[7864]: I0224 02:10:30.651451 7864 generic.go:334] "Generic (PLEG): container finished" podID="8e0c87ae-6387-4c00-b03d-582566907fb6" containerID="2b6a32bb9499d220c3167aa1a2fb2a91c9d1624533d8361accf480baf5e26efd" exitCode=0 Feb 24 02:10:30.654666 master-0 kubenswrapper[7864]: I0224 02:10:30.651545 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" event={"ID":"8e0c87ae-6387-4c00-b03d-582566907fb6","Type":"ContainerDied","Data":"2b6a32bb9499d220c3167aa1a2fb2a91c9d1624533d8361accf480baf5e26efd"} Feb 24 02:10:30.654666 master-0 kubenswrapper[7864]: I0224 02:10:30.652391 7864 scope.go:117] "RemoveContainer" containerID="2b6a32bb9499d220c3167aa1a2fb2a91c9d1624533d8361accf480baf5e26efd" Feb 24 02:10:31.169636 master-0 kubenswrapper[7864]: I0224 02:10:31.169493 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4"] Feb 24 02:10:31.182121 master-0 kubenswrapper[7864]: W0224 02:10:31.182072 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode76f58c7_471f_4f1d_bb1f_5df1af4eeb5d.slice/crio-824e46bd462387249603454ca45d03ebca612478cdeb336ee057821a4d25b262 WatchSource:0}: Error finding container 824e46bd462387249603454ca45d03ebca612478cdeb336ee057821a4d25b262: Status 404 returned error can't find the container 
with id 824e46bd462387249603454ca45d03ebca612478cdeb336ee057821a4d25b262 Feb 24 02:10:31.646458 master-0 kubenswrapper[7864]: I0224 02:10:31.644877 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw"] Feb 24 02:10:31.646458 master-0 kubenswrapper[7864]: I0224 02:10:31.645733 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:31.655228 master-0 kubenswrapper[7864]: I0224 02:10:31.653529 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7b65dc9fcb-22sgl"] Feb 24 02:10:31.655228 master-0 kubenswrapper[7864]: I0224 02:10:31.653787 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 02:10:31.655228 master-0 kubenswrapper[7864]: I0224 02:10:31.654754 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.659081 master-0 kubenswrapper[7864]: I0224 02:10:31.655912 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"] Feb 24 02:10:31.659081 master-0 kubenswrapper[7864]: I0224 02:10:31.656868 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" Feb 24 02:10:31.659081 master-0 kubenswrapper[7864]: I0224 02:10:31.657228 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 02:10:31.659081 master-0 kubenswrapper[7864]: I0224 02:10:31.657403 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 02:10:31.659081 master-0 kubenswrapper[7864]: I0224 02:10:31.658281 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 24 02:10:31.659081 master-0 kubenswrapper[7864]: I0224 02:10:31.658777 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 02:10:31.659081 master-0 kubenswrapper[7864]: I0224 02:10:31.659023 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6"] Feb 24 02:10:31.659894 master-0 kubenswrapper[7864]: I0224 02:10:31.659041 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 02:10:31.660414 master-0 kubenswrapper[7864]: I0224 02:10:31.660055 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6" Feb 24 02:10:31.662043 master-0 kubenswrapper[7864]: I0224 02:10:31.662012 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 02:10:31.665788 master-0 kubenswrapper[7864]: I0224 02:10:31.665749 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw"] Feb 24 02:10:31.665910 master-0 kubenswrapper[7864]: I0224 02:10:31.665797 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" event={"ID":"9d706583-f8dc-4a3c-832b-7e8249d0c662","Type":"ContainerStarted","Data":"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0"} Feb 24 02:10:31.665910 master-0 kubenswrapper[7864]: I0224 02:10:31.665823 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" event={"ID":"9d706583-f8dc-4a3c-832b-7e8249d0c662","Type":"ContainerStarted","Data":"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a"} Feb 24 02:10:31.702954 master-0 kubenswrapper[7864]: I0224 02:10:31.701998 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 02:10:31.707149 master-0 kubenswrapper[7864]: I0224 02:10:31.707057 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6"] Feb 24 02:10:31.721437 master-0 kubenswrapper[7864]: I0224 02:10:31.721354 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" 
event={"ID":"8e0c87ae-6387-4c00-b03d-582566907fb6","Type":"ContainerStarted","Data":"28c8964d64effe5cfb9aee600d94edf8ec500cf08e78ee4ba28d38f4864c5e27"} Feb 24 02:10:31.723254 master-0 kubenswrapper[7864]: I0224 02:10:31.723217 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"] Feb 24 02:10:31.726968 master-0 kubenswrapper[7864]: I0224 02:10:31.725615 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" event={"ID":"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d","Type":"ContainerStarted","Data":"82424b9b463daf23d9c394c69448182da116e0694938488e2079645b0e8398dd"} Feb 24 02:10:31.726968 master-0 kubenswrapper[7864]: I0224 02:10:31.725647 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" event={"ID":"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d","Type":"ContainerStarted","Data":"25626cd2d8668a56750cfaaad01d432499b9f72732dea0fb561e1b3a7aadf5c7"} Feb 24 02:10:31.726968 master-0 kubenswrapper[7864]: I0224 02:10:31.725664 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" event={"ID":"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d","Type":"ContainerStarted","Data":"824e46bd462387249603454ca45d03ebca612478cdeb336ee057821a4d25b262"} Feb 24 02:10:31.791254 master-0 kubenswrapper[7864]: I0224 02:10:31.791160 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" podStartSLOduration=2.791139672 podStartE2EDuration="2.791139672s" podCreationTimestamp="2026-02-24 02:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:10:31.78838894 +0000 UTC m=+396.116042562" 
watchObservedRunningTime="2026-02-24 02:10:31.791139672 +0000 UTC m=+396.118793294" Feb 24 02:10:31.811601 master-0 kubenswrapper[7864]: I0224 02:10:31.811526 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqqwj\" (UniqueName: \"kubernetes.io/projected/ca1250a6-30f0-4cc0-b9b0-eabde42aefcf-kube-api-access-fqqwj\") pod \"network-check-source-58fb6744f5-l4wh6\" (UID: \"ca1250a6-30f0-4cc0-b9b0-eabde42aefcf\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6" Feb 24 02:10:31.811839 master-0 kubenswrapper[7864]: I0224 02:10:31.811635 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-service-ca-bundle\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.811839 master-0 kubenswrapper[7864]: I0224 02:10:31.811801 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-metrics-certs\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.811839 master-0 kubenswrapper[7864]: I0224 02:10:31.811834 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24983c94-f158-4a07-854b-2e5455374f19-config-volume\") pod \"collect-profiles-29531640-kptmw\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:31.811995 master-0 kubenswrapper[7864]: I0224 02:10:31.811887 7864 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5kx\" (UniqueName: \"kubernetes.io/projected/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-kube-api-access-qc5kx\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.811995 master-0 kubenswrapper[7864]: I0224 02:10:31.811950 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/22a83952-32ec-48f7-85cd-209b62362ae2-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-9gkp2\" (UID: \"22a83952-32ec-48f7-85cd-209b62362ae2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" Feb 24 02:10:31.811995 master-0 kubenswrapper[7864]: I0224 02:10:31.811972 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-stats-auth\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.812121 master-0 kubenswrapper[7864]: I0224 02:10:31.812008 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htvbl\" (UniqueName: \"kubernetes.io/projected/24983c94-f158-4a07-854b-2e5455374f19-kube-api-access-htvbl\") pod \"collect-profiles-29531640-kptmw\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:31.812121 master-0 kubenswrapper[7864]: I0224 02:10:31.812034 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24983c94-f158-4a07-854b-2e5455374f19-secret-volume\") 
pod \"collect-profiles-29531640-kptmw\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:31.812121 master-0 kubenswrapper[7864]: I0224 02:10:31.812089 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-default-certificate\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.877941 master-0 kubenswrapper[7864]: I0224 02:10:31.877880 7864 scope.go:117] "RemoveContainer" containerID="4291e0907ef19a32f51af939efa4c04ce2db9b3453891c7e12142da00a40e520" Feb 24 02:10:31.913510 master-0 kubenswrapper[7864]: I0224 02:10:31.913462 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-default-certificate\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.913819 master-0 kubenswrapper[7864]: I0224 02:10:31.913772 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqqwj\" (UniqueName: \"kubernetes.io/projected/ca1250a6-30f0-4cc0-b9b0-eabde42aefcf-kube-api-access-fqqwj\") pod \"network-check-source-58fb6744f5-l4wh6\" (UID: \"ca1250a6-30f0-4cc0-b9b0-eabde42aefcf\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6" Feb 24 02:10:31.913959 master-0 kubenswrapper[7864]: I0224 02:10:31.913881 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-service-ca-bundle\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: 
\"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.914023 master-0 kubenswrapper[7864]: I0224 02:10:31.914002 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-metrics-certs\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.914786 master-0 kubenswrapper[7864]: I0224 02:10:31.914747 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-service-ca-bundle\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.914960 master-0 kubenswrapper[7864]: I0224 02:10:31.914926 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24983c94-f158-4a07-854b-2e5455374f19-config-volume\") pod \"collect-profiles-29531640-kptmw\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:31.915075 master-0 kubenswrapper[7864]: I0224 02:10:31.915043 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5kx\" (UniqueName: \"kubernetes.io/projected/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-kube-api-access-qc5kx\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.915113 master-0 kubenswrapper[7864]: I0224 02:10:31.915085 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/22a83952-32ec-48f7-85cd-209b62362ae2-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-9gkp2\" (UID: \"22a83952-32ec-48f7-85cd-209b62362ae2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" Feb 24 02:10:31.915149 master-0 kubenswrapper[7864]: I0224 02:10:31.915128 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-stats-auth\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.915261 master-0 kubenswrapper[7864]: I0224 02:10:31.915233 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htvbl\" (UniqueName: \"kubernetes.io/projected/24983c94-f158-4a07-854b-2e5455374f19-kube-api-access-htvbl\") pod \"collect-profiles-29531640-kptmw\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:31.915316 master-0 kubenswrapper[7864]: I0224 02:10:31.915299 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24983c94-f158-4a07-854b-2e5455374f19-secret-volume\") pod \"collect-profiles-29531640-kptmw\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:31.915385 master-0 kubenswrapper[7864]: I0224 02:10:31.915360 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24983c94-f158-4a07-854b-2e5455374f19-config-volume\") pod \"collect-profiles-29531640-kptmw\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 
02:10:31.918217 master-0 kubenswrapper[7864]: I0224 02:10:31.918186 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-metrics-certs\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.919238 master-0 kubenswrapper[7864]: I0224 02:10:31.919210 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24983c94-f158-4a07-854b-2e5455374f19-secret-volume\") pod \"collect-profiles-29531640-kptmw\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:31.920382 master-0 kubenswrapper[7864]: I0224 02:10:31.920337 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-default-certificate\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.920484 master-0 kubenswrapper[7864]: I0224 02:10:31.920447 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/22a83952-32ec-48f7-85cd-209b62362ae2-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-9gkp2\" (UID: \"22a83952-32ec-48f7-85cd-209b62362ae2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" Feb 24 02:10:31.921213 master-0 kubenswrapper[7864]: I0224 02:10:31.921168 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-stats-auth\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: 
\"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:31.923210 master-0 kubenswrapper[7864]: I0224 02:10:31.923163 7864 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 02:10:31.938567 master-0 kubenswrapper[7864]: I0224 02:10:31.938512 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqqwj\" (UniqueName: \"kubernetes.io/projected/ca1250a6-30f0-4cc0-b9b0-eabde42aefcf-kube-api-access-fqqwj\") pod \"network-check-source-58fb6744f5-l4wh6\" (UID: \"ca1250a6-30f0-4cc0-b9b0-eabde42aefcf\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6" Feb 24 02:10:31.942835 master-0 kubenswrapper[7864]: I0224 02:10:31.941786 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htvbl\" (UniqueName: \"kubernetes.io/projected/24983c94-f158-4a07-854b-2e5455374f19-kube-api-access-htvbl\") pod \"collect-profiles-29531640-kptmw\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:31.943401 master-0 kubenswrapper[7864]: I0224 02:10:31.943245 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5kx\" (UniqueName: \"kubernetes.io/projected/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-kube-api-access-qc5kx\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:32.061158 master-0 kubenswrapper[7864]: I0224 02:10:32.061110 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:10:32.081191 master-0 kubenswrapper[7864]: I0224 02:10:32.081143 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:32.091715 master-0 kubenswrapper[7864]: I0224 02:10:32.091666 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" Feb 24 02:10:32.113177 master-0 kubenswrapper[7864]: I0224 02:10:32.113138 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6" Feb 24 02:10:32.514022 master-0 kubenswrapper[7864]: I0224 02:10:32.513945 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw"] Feb 24 02:10:32.520344 master-0 kubenswrapper[7864]: W0224 02:10:32.520287 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24983c94_f158_4a07_854b_2e5455374f19.slice/crio-4953fab92508c627d5109fd9998dd27ef32a95f892628beaf8d18c65fdcda821 WatchSource:0}: Error finding container 4953fab92508c627d5109fd9998dd27ef32a95f892628beaf8d18c65fdcda821: Status 404 returned error can't find the container with id 4953fab92508c627d5109fd9998dd27ef32a95f892628beaf8d18c65fdcda821 Feb 24 02:10:32.623760 master-0 kubenswrapper[7864]: I0224 02:10:32.623712 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6"] Feb 24 02:10:32.628389 master-0 kubenswrapper[7864]: I0224 02:10:32.628348 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"] Feb 24 02:10:32.739882 master-0 kubenswrapper[7864]: I0224 02:10:32.739839 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" 
event={"ID":"9d706583-f8dc-4a3c-832b-7e8249d0c662","Type":"ContainerStarted","Data":"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836"} Feb 24 02:10:32.747693 master-0 kubenswrapper[7864]: I0224 02:10:32.743167 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/2.log" Feb 24 02:10:32.747693 master-0 kubenswrapper[7864]: I0224 02:10:32.743257 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerStarted","Data":"361cb30b3b5fa4aef37729ebe34ed6dc589b5812da2325814ef9feeb894335c5"} Feb 24 02:10:32.747693 master-0 kubenswrapper[7864]: I0224 02:10:32.744388 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerStarted","Data":"2dab12c36fbca650a107bc58df00044fd6561209f9c466f04a4c8ce72b69201d"} Feb 24 02:10:32.747693 master-0 kubenswrapper[7864]: I0224 02:10:32.745927 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" event={"ID":"24983c94-f158-4a07-854b-2e5455374f19","Type":"ContainerStarted","Data":"e51ee7952dfd445e552e98f87e6cac337f269d310845fcc3274f65c031cd5dd3"} Feb 24 02:10:32.747693 master-0 kubenswrapper[7864]: I0224 02:10:32.746072 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" event={"ID":"24983c94-f158-4a07-854b-2e5455374f19","Type":"ContainerStarted","Data":"4953fab92508c627d5109fd9998dd27ef32a95f892628beaf8d18c65fdcda821"} Feb 24 02:10:32.764223 master-0 kubenswrapper[7864]: I0224 02:10:32.764068 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" podStartSLOduration=4.910281537 podStartE2EDuration="12.764037078s" podCreationTimestamp="2026-02-24 02:10:20 +0000 UTC" firstStartedPulling="2026-02-24 02:10:22.94153912 +0000 UTC m=+387.269192782" lastFinishedPulling="2026-02-24 02:10:30.795294701 +0000 UTC m=+395.122948323" observedRunningTime="2026-02-24 02:10:32.756085058 +0000 UTC m=+397.083738690" watchObservedRunningTime="2026-02-24 02:10:32.764037078 +0000 UTC m=+397.091690730" Feb 24 02:10:32.861092 master-0 kubenswrapper[7864]: I0224 02:10:32.861000 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" podStartSLOduration=472.86097445 podStartE2EDuration="7m52.86097445s" podCreationTimestamp="2026-02-24 02:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:10:32.796688391 +0000 UTC m=+397.124342023" watchObservedRunningTime="2026-02-24 02:10:32.86097445 +0000 UTC m=+397.188628112" Feb 24 02:10:32.929805 master-0 kubenswrapper[7864]: I0224 02:10:32.929763 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:10:33.604677 master-0 kubenswrapper[7864]: I0224 02:10:33.599787 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"] Feb 24 02:10:33.604677 master-0 kubenswrapper[7864]: I0224 02:10:33.600338 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" podUID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerName="kube-rbac-proxy" containerID="cri-o://a0afb79e704d94990f37c7377f85e018c90abaa7d13c2a20dd0aac3f294b7e15" gracePeriod=30 Feb 24 02:10:33.604677 
master-0 kubenswrapper[7864]: I0224 02:10:33.600419 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" podUID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerName="machine-approver-controller" containerID="cri-o://704cc93c8cee0a4135d186aa5dbd6348038b2819c9f9b5e8637ce4b259a4b3a4" gracePeriod=30 Feb 24 02:10:33.762152 master-0 kubenswrapper[7864]: I0224 02:10:33.762087 7864 generic.go:334] "Generic (PLEG): container finished" podID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerID="704cc93c8cee0a4135d186aa5dbd6348038b2819c9f9b5e8637ce4b259a4b3a4" exitCode=0 Feb 24 02:10:33.762152 master-0 kubenswrapper[7864]: I0224 02:10:33.762127 7864 generic.go:334] "Generic (PLEG): container finished" podID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerID="a0afb79e704d94990f37c7377f85e018c90abaa7d13c2a20dd0aac3f294b7e15" exitCode=0 Feb 24 02:10:33.762542 master-0 kubenswrapper[7864]: I0224 02:10:33.762185 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" event={"ID":"b1fb12ad-9035-4039-9be6-6f8a84b93063","Type":"ContainerDied","Data":"704cc93c8cee0a4135d186aa5dbd6348038b2819c9f9b5e8637ce4b259a4b3a4"} Feb 24 02:10:33.762542 master-0 kubenswrapper[7864]: I0224 02:10:33.762258 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" event={"ID":"b1fb12ad-9035-4039-9be6-6f8a84b93063","Type":"ContainerDied","Data":"a0afb79e704d94990f37c7377f85e018c90abaa7d13c2a20dd0aac3f294b7e15"} Feb 24 02:10:34.343658 master-0 kubenswrapper[7864]: I0224 02:10:34.343549 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-drf28"] Feb 24 02:10:34.344552 master-0 kubenswrapper[7864]: I0224 02:10:34.344507 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.346292 master-0 kubenswrapper[7864]: I0224 02:10:34.346211 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-5kctc" Feb 24 02:10:34.349332 master-0 kubenswrapper[7864]: I0224 02:10:34.349280 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 24 02:10:34.350147 master-0 kubenswrapper[7864]: I0224 02:10:34.350102 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 02:10:34.483855 master-0 kubenswrapper[7864]: I0224 02:10:34.483750 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-node-bootstrap-token\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.483855 master-0 kubenswrapper[7864]: I0224 02:10:34.483864 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-certs\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.484266 master-0 kubenswrapper[7864]: I0224 02:10:34.484066 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6zq\" (UniqueName: \"kubernetes.io/projected/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-kube-api-access-rv6zq\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " 
pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.586431 master-0 kubenswrapper[7864]: I0224 02:10:34.586328 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-node-bootstrap-token\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.586431 master-0 kubenswrapper[7864]: I0224 02:10:34.586409 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-certs\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.586762 master-0 kubenswrapper[7864]: I0224 02:10:34.586448 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv6zq\" (UniqueName: \"kubernetes.io/projected/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-kube-api-access-rv6zq\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.590953 master-0 kubenswrapper[7864]: I0224 02:10:34.590911 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-node-bootstrap-token\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.591289 master-0 kubenswrapper[7864]: I0224 02:10:34.591222 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-certs\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.613711 master-0 kubenswrapper[7864]: I0224 02:10:34.613554 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv6zq\" (UniqueName: \"kubernetes.io/projected/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-kube-api-access-rv6zq\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:34.660653 master-0 kubenswrapper[7864]: I0224 02:10:34.660587 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:10:35.577258 master-0 kubenswrapper[7864]: W0224 02:10:35.577169 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22a83952_32ec_48f7_85cd_209b62362ae2.slice/crio-0a122f4d5531f3489b5545a54ec94812a3a4adf1ceb59316f98f88f87840e7dc WatchSource:0}: Error finding container 0a122f4d5531f3489b5545a54ec94812a3a4adf1ceb59316f98f88f87840e7dc: Status 404 returned error can't find the container with id 0a122f4d5531f3489b5545a54ec94812a3a4adf1ceb59316f98f88f87840e7dc Feb 24 02:10:35.782557 master-0 kubenswrapper[7864]: I0224 02:10:35.782328 7864 generic.go:334] "Generic (PLEG): container finished" podID="24983c94-f158-4a07-854b-2e5455374f19" containerID="e51ee7952dfd445e552e98f87e6cac337f269d310845fcc3274f65c031cd5dd3" exitCode=0 Feb 24 02:10:35.782557 master-0 kubenswrapper[7864]: I0224 02:10:35.782428 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" 
event={"ID":"24983c94-f158-4a07-854b-2e5455374f19","Type":"ContainerDied","Data":"e51ee7952dfd445e552e98f87e6cac337f269d310845fcc3274f65c031cd5dd3"} Feb 24 02:10:35.784992 master-0 kubenswrapper[7864]: I0224 02:10:35.784913 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" event={"ID":"22a83952-32ec-48f7-85cd-209b62362ae2","Type":"ContainerStarted","Data":"0a122f4d5531f3489b5545a54ec94812a3a4adf1ceb59316f98f88f87840e7dc"} Feb 24 02:10:35.786828 master-0 kubenswrapper[7864]: I0224 02:10:35.786791 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6" event={"ID":"ca1250a6-30f0-4cc0-b9b0-eabde42aefcf","Type":"ContainerStarted","Data":"fb67bb4fcbc0cf30dc19aad2f8b3b13f31473c855e7d30010f86d687f8822d44"} Feb 24 02:10:36.182132 master-0 kubenswrapper[7864]: I0224 02:10:36.182041 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" Feb 24 02:10:36.317620 master-0 kubenswrapper[7864]: I0224 02:10:36.317530 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-config\") pod \"b1fb12ad-9035-4039-9be6-6f8a84b93063\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " Feb 24 02:10:36.317970 master-0 kubenswrapper[7864]: I0224 02:10:36.317730 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhfzs\" (UniqueName: \"kubernetes.io/projected/b1fb12ad-9035-4039-9be6-6f8a84b93063-kube-api-access-nhfzs\") pod \"b1fb12ad-9035-4039-9be6-6f8a84b93063\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " Feb 24 02:10:36.317970 master-0 kubenswrapper[7864]: I0224 02:10:36.317781 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-auth-proxy-config\") pod \"b1fb12ad-9035-4039-9be6-6f8a84b93063\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " Feb 24 02:10:36.317970 master-0 kubenswrapper[7864]: I0224 02:10:36.317822 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b1fb12ad-9035-4039-9be6-6f8a84b93063-machine-approver-tls\") pod \"b1fb12ad-9035-4039-9be6-6f8a84b93063\" (UID: \"b1fb12ad-9035-4039-9be6-6f8a84b93063\") " Feb 24 02:10:36.318377 master-0 kubenswrapper[7864]: I0224 02:10:36.318211 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-config" (OuterVolumeSpecName: "config") pod "b1fb12ad-9035-4039-9be6-6f8a84b93063" (UID: "b1fb12ad-9035-4039-9be6-6f8a84b93063"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:10:36.318631 master-0 kubenswrapper[7864]: I0224 02:10:36.318552 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "b1fb12ad-9035-4039-9be6-6f8a84b93063" (UID: "b1fb12ad-9035-4039-9be6-6f8a84b93063"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:10:36.322007 master-0 kubenswrapper[7864]: I0224 02:10:36.321938 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1fb12ad-9035-4039-9be6-6f8a84b93063-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "b1fb12ad-9035-4039-9be6-6f8a84b93063" (UID: "b1fb12ad-9035-4039-9be6-6f8a84b93063"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:10:36.327682 master-0 kubenswrapper[7864]: I0224 02:10:36.327634 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1fb12ad-9035-4039-9be6-6f8a84b93063-kube-api-access-nhfzs" (OuterVolumeSpecName: "kube-api-access-nhfzs") pod "b1fb12ad-9035-4039-9be6-6f8a84b93063" (UID: "b1fb12ad-9035-4039-9be6-6f8a84b93063"). InnerVolumeSpecName "kube-api-access-nhfzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:10:36.419675 master-0 kubenswrapper[7864]: I0224 02:10:36.419619 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhfzs\" (UniqueName: \"kubernetes.io/projected/b1fb12ad-9035-4039-9be6-6f8a84b93063-kube-api-access-nhfzs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:36.419811 master-0 kubenswrapper[7864]: I0224 02:10:36.419682 7864 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:36.419811 master-0 kubenswrapper[7864]: I0224 02:10:36.419707 7864 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b1fb12ad-9035-4039-9be6-6f8a84b93063-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:36.420361 master-0 kubenswrapper[7864]: I0224 02:10:36.420329 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1fb12ad-9035-4039-9be6-6f8a84b93063-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:10:36.800209 master-0 kubenswrapper[7864]: I0224 02:10:36.797788 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" Feb 24 02:10:36.800209 master-0 kubenswrapper[7864]: I0224 02:10:36.799692 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc" event={"ID":"b1fb12ad-9035-4039-9be6-6f8a84b93063","Type":"ContainerDied","Data":"1d7ad5dce5cca89cf38ab4bf4d6df280fd4b670e4a45f436939e2c9cc9abc228"} Feb 24 02:10:36.800209 master-0 kubenswrapper[7864]: I0224 02:10:36.799778 7864 scope.go:117] "RemoveContainer" containerID="704cc93c8cee0a4135d186aa5dbd6348038b2819c9f9b5e8637ce4b259a4b3a4" Feb 24 02:10:36.843612 master-0 kubenswrapper[7864]: I0224 02:10:36.841867 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"] Feb 24 02:10:36.849526 master-0 kubenswrapper[7864]: I0224 02:10:36.849477 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-rqrlc"] Feb 24 02:10:36.909762 master-0 kubenswrapper[7864]: I0224 02:10:36.909678 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"] Feb 24 02:10:36.910002 master-0 kubenswrapper[7864]: E0224 02:10:36.909960 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerName="machine-approver-controller" Feb 24 02:10:36.910002 master-0 kubenswrapper[7864]: I0224 02:10:36.909986 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerName="machine-approver-controller" Feb 24 02:10:36.910155 master-0 kubenswrapper[7864]: E0224 02:10:36.910014 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerName="kube-rbac-proxy" Feb 24 02:10:36.910155 master-0 kubenswrapper[7864]: I0224 02:10:36.910025 
7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerName="kube-rbac-proxy" Feb 24 02:10:36.910155 master-0 kubenswrapper[7864]: I0224 02:10:36.910148 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerName="machine-approver-controller" Feb 24 02:10:36.910338 master-0 kubenswrapper[7864]: I0224 02:10:36.910165 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1fb12ad-9035-4039-9be6-6f8a84b93063" containerName="kube-rbac-proxy" Feb 24 02:10:36.910968 master-0 kubenswrapper[7864]: I0224 02:10:36.910924 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:10:36.922414 master-0 kubenswrapper[7864]: I0224 02:10:36.914810 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 02:10:36.922414 master-0 kubenswrapper[7864]: I0224 02:10:36.915109 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 02:10:36.922414 master-0 kubenswrapper[7864]: I0224 02:10:36.915257 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 02:10:36.922414 master-0 kubenswrapper[7864]: I0224 02:10:36.915385 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 02:10:36.922414 master-0 kubenswrapper[7864]: I0224 02:10:36.915437 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-7xngw" Feb 24 02:10:36.922414 master-0 kubenswrapper[7864]: I0224 02:10:36.915516 7864 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 02:10:37.032062 master-0 kubenswrapper[7864]: I0224 02:10:37.031986 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:10:37.032320 master-0 kubenswrapper[7864]: I0224 02:10:37.032071 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:10:37.032320 master-0 kubenswrapper[7864]: I0224 02:10:37.032174 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckc2\" (UniqueName: \"kubernetes.io/projected/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-kube-api-access-gckc2\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:10:37.032320 master-0 kubenswrapper[7864]: I0224 02:10:37.032245 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:10:37.134393 master-0 kubenswrapper[7864]: I0224 02:10:37.134252 7864 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:10:37.134393 master-0 kubenswrapper[7864]: I0224 02:10:37.134322 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:10:37.134393 master-0 kubenswrapper[7864]: I0224 02:10:37.134368 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gckc2\" (UniqueName: \"kubernetes.io/projected/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-kube-api-access-gckc2\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:10:37.134591 master-0 kubenswrapper[7864]: I0224 02:10:37.134409 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:10:37.135483 master-0 kubenswrapper[7864]: I0224 02:10:37.135437 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:10:37.135769 master-0 kubenswrapper[7864]: I0224 02:10:37.135707 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:10:37.139353 master-0 kubenswrapper[7864]: I0224 02:10:37.139310 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:10:37.161860 master-0 kubenswrapper[7864]: I0224 02:10:37.161339 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gckc2\" (UniqueName: \"kubernetes.io/projected/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-kube-api-access-gckc2\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:10:37.186991 master-0 kubenswrapper[7864]: W0224 02:10:37.186765 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f34dc85_8fd3_4c8c_ad30_32a956f6f9e1.slice/crio-560cc2fc6affd50d504fd0043ec0076b50148137946f51caca10417e2832ae2a WatchSource:0}: Error finding container 560cc2fc6affd50d504fd0043ec0076b50148137946f51caca10417e2832ae2a: Status 404 returned error can't find the container with id 560cc2fc6affd50d504fd0043ec0076b50148137946f51caca10417e2832ae2a
Feb 24 02:10:37.198389 master-0 kubenswrapper[7864]: I0224 02:10:37.197519 7864 scope.go:117] "RemoveContainer" containerID="a0afb79e704d94990f37c7377f85e018c90abaa7d13c2a20dd0aac3f294b7e15"
Feb 24 02:10:37.254894 master-0 kubenswrapper[7864]: I0224 02:10:37.254834 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:10:37.297670 master-0 kubenswrapper[7864]: I0224 02:10:37.297627 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw"
Feb 24 02:10:37.449428 master-0 kubenswrapper[7864]: I0224 02:10:37.449352 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htvbl\" (UniqueName: \"kubernetes.io/projected/24983c94-f158-4a07-854b-2e5455374f19-kube-api-access-htvbl\") pod \"24983c94-f158-4a07-854b-2e5455374f19\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") "
Feb 24 02:10:37.449635 master-0 kubenswrapper[7864]: I0224 02:10:37.449491 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24983c94-f158-4a07-854b-2e5455374f19-config-volume\") pod \"24983c94-f158-4a07-854b-2e5455374f19\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") "
Feb 24 02:10:37.449721 master-0 kubenswrapper[7864]: I0224 02:10:37.449656 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24983c94-f158-4a07-854b-2e5455374f19-secret-volume\") pod \"24983c94-f158-4a07-854b-2e5455374f19\" (UID: \"24983c94-f158-4a07-854b-2e5455374f19\") "
Feb 24 02:10:37.453243 master-0 kubenswrapper[7864]: I0224 02:10:37.453187 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24983c94-f158-4a07-854b-2e5455374f19-config-volume" (OuterVolumeSpecName: "config-volume") pod "24983c94-f158-4a07-854b-2e5455374f19" (UID: "24983c94-f158-4a07-854b-2e5455374f19"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:10:37.455192 master-0 kubenswrapper[7864]: I0224 02:10:37.455136 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24983c94-f158-4a07-854b-2e5455374f19-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24983c94-f158-4a07-854b-2e5455374f19" (UID: "24983c94-f158-4a07-854b-2e5455374f19"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:10:37.458990 master-0 kubenswrapper[7864]: I0224 02:10:37.458910 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24983c94-f158-4a07-854b-2e5455374f19-kube-api-access-htvbl" (OuterVolumeSpecName: "kube-api-access-htvbl") pod "24983c94-f158-4a07-854b-2e5455374f19" (UID: "24983c94-f158-4a07-854b-2e5455374f19"). InnerVolumeSpecName "kube-api-access-htvbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:10:37.551553 master-0 kubenswrapper[7864]: I0224 02:10:37.551508 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htvbl\" (UniqueName: \"kubernetes.io/projected/24983c94-f158-4a07-854b-2e5455374f19-kube-api-access-htvbl\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:37.551553 master-0 kubenswrapper[7864]: I0224 02:10:37.551545 7864 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24983c94-f158-4a07-854b-2e5455374f19-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:37.551553 master-0 kubenswrapper[7864]: I0224 02:10:37.551556 7864 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24983c94-f158-4a07-854b-2e5455374f19-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:37.820914 master-0 kubenswrapper[7864]: I0224 02:10:37.820851 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-drf28" event={"ID":"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1","Type":"ContainerStarted","Data":"2a11c76f0140f7173ebd4cdba2e1203079bbe90c60c221d6fa54a28e0ae0592e"}
Feb 24 02:10:37.820914 master-0 kubenswrapper[7864]: I0224 02:10:37.820908 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-drf28" event={"ID":"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1","Type":"ContainerStarted","Data":"560cc2fc6affd50d504fd0043ec0076b50148137946f51caca10417e2832ae2a"}
Feb 24 02:10:37.824017 master-0 kubenswrapper[7864]: I0224 02:10:37.823973 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" event={"ID":"0ce6dd93-084c-4e15-8b7c-e0829a6df14e","Type":"ContainerStarted","Data":"4b48f973de3ca235282063a3a1d4565ab0f94d9aba4a7a4a30b638daf156cb07"}
Feb 24 02:10:37.827424 master-0 kubenswrapper[7864]: I0224 02:10:37.827392 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6" event={"ID":"ca1250a6-30f0-4cc0-b9b0-eabde42aefcf","Type":"ContainerStarted","Data":"42b614867a6be905f8b67fbb1eadce2d63350b32efa0ed0966971ba56c068fcf"}
Feb 24 02:10:37.832196 master-0 kubenswrapper[7864]: I0224 02:10:37.832117 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerStarted","Data":"160afcf676f240f4edef48c62c951182da5bbf1b49c67d215747997188263486"}
Feb 24 02:10:37.834293 master-0 kubenswrapper[7864]: I0224 02:10:37.834259 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw"
Feb 24 02:10:37.834377 master-0 kubenswrapper[7864]: I0224 02:10:37.834258 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" event={"ID":"24983c94-f158-4a07-854b-2e5455374f19","Type":"ContainerDied","Data":"4953fab92508c627d5109fd9998dd27ef32a95f892628beaf8d18c65fdcda821"}
Feb 24 02:10:37.834431 master-0 kubenswrapper[7864]: I0224 02:10:37.834384 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4953fab92508c627d5109fd9998dd27ef32a95f892628beaf8d18c65fdcda821"
Feb 24 02:10:37.836614 master-0 kubenswrapper[7864]: I0224 02:10:37.836566 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" event={"ID":"22a83952-32ec-48f7-85cd-209b62362ae2","Type":"ContainerStarted","Data":"48f55c332467fced473c4c4e91af307834aa39f6e1f504defa9413e83cb73702"}
Feb 24 02:10:37.836899 master-0 kubenswrapper[7864]: I0224 02:10:37.836878 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"
Feb 24 02:10:37.837836 master-0 kubenswrapper[7864]: I0224 02:10:37.837792 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" event={"ID":"8ebd1a97-ff7b-4a10-a1b5-956e427478a8","Type":"ContainerStarted","Data":"b42c212cb365bd4dad9063fd3d49d84292444529270d009b20ccad68831b287a"}
Feb 24 02:10:37.837836 master-0 kubenswrapper[7864]: I0224 02:10:37.837827 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" event={"ID":"8ebd1a97-ff7b-4a10-a1b5-956e427478a8","Type":"ContainerStarted","Data":"276e47463c76f9595550735ca5a2eb97f44bfa685298a20ea61ee705f8a41bd4"}
Feb 24 02:10:37.847094 master-0 kubenswrapper[7864]: I0224 02:10:37.846197 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"
Feb 24 02:10:37.848111 master-0 kubenswrapper[7864]: I0224 02:10:37.848058 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-drf28" podStartSLOduration=3.848044469 podStartE2EDuration="3.848044469s" podCreationTimestamp="2026-02-24 02:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:10:37.847160315 +0000 UTC m=+402.174813937" watchObservedRunningTime="2026-02-24 02:10:37.848044469 +0000 UTC m=+402.175698081"
Feb 24 02:10:37.875721 master-0 kubenswrapper[7864]: I0224 02:10:37.870899 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6" podStartSLOduration=448.870890863 podStartE2EDuration="7m28.870890863s" podCreationTimestamp="2026-02-24 02:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:10:37.869035074 +0000 UTC m=+402.196688726" watchObservedRunningTime="2026-02-24 02:10:37.870890863 +0000 UTC m=+402.198544485"
Feb 24 02:10:37.884815 master-0 kubenswrapper[7864]: I0224 02:10:37.884175 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1fb12ad-9035-4039-9be6-6f8a84b93063" path="/var/lib/kubelet/pods/b1fb12ad-9035-4039-9be6-6f8a84b93063/volumes"
Feb 24 02:10:37.899695 master-0 kubenswrapper[7864]: I0224 02:10:37.895138 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podStartSLOduration=370.787083657 podStartE2EDuration="6m15.895115823s" podCreationTimestamp="2026-02-24 02:04:22 +0000 UTC" firstStartedPulling="2026-02-24 02:10:32.121708 +0000 UTC m=+396.449361662" lastFinishedPulling="2026-02-24 02:10:37.229740166 +0000 UTC m=+401.557393828" observedRunningTime="2026-02-24 02:10:37.888466837 +0000 UTC m=+402.216120459" watchObservedRunningTime="2026-02-24 02:10:37.895115823 +0000 UTC m=+402.222769445"
Feb 24 02:10:37.911702 master-0 kubenswrapper[7864]: I0224 02:10:37.910442 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" podStartSLOduration=360.204517927 podStartE2EDuration="6m1.910434478s" podCreationTimestamp="2026-02-24 02:04:36 +0000 UTC" firstStartedPulling="2026-02-24 02:10:35.580886383 +0000 UTC m=+399.908540005" lastFinishedPulling="2026-02-24 02:10:37.286802894 +0000 UTC m=+401.614456556" observedRunningTime="2026-02-24 02:10:37.908962259 +0000 UTC m=+402.236615881" watchObservedRunningTime="2026-02-24 02:10:37.910434478 +0000 UTC m=+402.238088100"
Feb 24 02:10:38.082200 master-0 kubenswrapper[7864]: I0224 02:10:38.082138 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl"
Feb 24 02:10:38.084913 master-0 kubenswrapper[7864]: I0224 02:10:38.084876 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:10:38.084913 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:10:38.084913 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:10:38.084913 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:10:38.085093 master-0 kubenswrapper[7864]: I0224 02:10:38.084937 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:10:38.367922 master-0 kubenswrapper[7864]: I0224 02:10:38.367848 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" podStartSLOduration=5.585208687 podStartE2EDuration="18.367826798s" podCreationTimestamp="2026-02-24 02:10:20 +0000 UTC" firstStartedPulling="2026-02-24 02:10:24.415001896 +0000 UTC m=+388.742655548" lastFinishedPulling="2026-02-24 02:10:37.197619857 +0000 UTC m=+401.525273659" observedRunningTime="2026-02-24 02:10:37.932925452 +0000 UTC m=+402.260579074" watchObservedRunningTime="2026-02-24 02:10:38.367826798 +0000 UTC m=+402.695480420"
Feb 24 02:10:38.370189 master-0 kubenswrapper[7864]: I0224 02:10:38.370164 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-66lml"]
Feb 24 02:10:38.370411 master-0 kubenswrapper[7864]: E0224 02:10:38.370392 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24983c94-f158-4a07-854b-2e5455374f19" containerName="collect-profiles"
Feb 24 02:10:38.370411 master-0 kubenswrapper[7864]: I0224 02:10:38.370408 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="24983c94-f158-4a07-854b-2e5455374f19" containerName="collect-profiles"
Feb 24 02:10:38.370527 master-0 kubenswrapper[7864]: I0224 02:10:38.370510 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="24983c94-f158-4a07-854b-2e5455374f19" containerName="collect-profiles"
Feb 24 02:10:38.371093 master-0 kubenswrapper[7864]: I0224 02:10:38.371072 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.373159 master-0 kubenswrapper[7864]: I0224 02:10:38.373112 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 24 02:10:38.373484 master-0 kubenswrapper[7864]: I0224 02:10:38.373460 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 24 02:10:38.373776 master-0 kubenswrapper[7864]: I0224 02:10:38.373748 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 24 02:10:38.373933 master-0 kubenswrapper[7864]: I0224 02:10:38.373910 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-shkn8"
Feb 24 02:10:38.389568 master-0 kubenswrapper[7864]: I0224 02:10:38.389530 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-66lml"]
Feb 24 02:10:38.465260 master-0 kubenswrapper[7864]: I0224 02:10:38.465213 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd796\" (UniqueName: \"kubernetes.io/projected/df2b8111-41c6-4333-b473-4c08fb836f70-kube-api-access-cd796\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.465524 master-0 kubenswrapper[7864]: I0224 02:10:38.465508 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.465707 master-0 kubenswrapper[7864]: I0224 02:10:38.465690 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/df2b8111-41c6-4333-b473-4c08fb836f70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.465832 master-0 kubenswrapper[7864]: I0224 02:10:38.465813 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.567719 master-0 kubenswrapper[7864]: I0224 02:10:38.567643 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.567872 master-0 kubenswrapper[7864]: I0224 02:10:38.567778 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/df2b8111-41c6-4333-b473-4c08fb836f70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.567872 master-0 kubenswrapper[7864]: I0224 02:10:38.567836 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.568086 master-0 kubenswrapper[7864]: I0224 02:10:38.567942 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd796\" (UniqueName: \"kubernetes.io/projected/df2b8111-41c6-4333-b473-4c08fb836f70-kube-api-access-cd796\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.568953 master-0 kubenswrapper[7864]: I0224 02:10:38.568898 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/df2b8111-41c6-4333-b473-4c08fb836f70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.573926 master-0 kubenswrapper[7864]: I0224 02:10:38.573876 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.574625 master-0 kubenswrapper[7864]: I0224 02:10:38.574540 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.597807 master-0 kubenswrapper[7864]: I0224 02:10:38.597751 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd796\" (UniqueName: \"kubernetes.io/projected/df2b8111-41c6-4333-b473-4c08fb836f70-kube-api-access-cd796\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.684406 master-0 kubenswrapper[7864]: I0224 02:10:38.684276 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:10:38.868390 master-0 kubenswrapper[7864]: I0224 02:10:38.865486 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" event={"ID":"8ebd1a97-ff7b-4a10-a1b5-956e427478a8","Type":"ContainerStarted","Data":"1ad25f42402e4374c8b94191386be9d7bc2003ced71ea11e24c2117025405399"}
Feb 24 02:10:38.904063 master-0 kubenswrapper[7864]: I0224 02:10:38.903850 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" podStartSLOduration=2.903815624 podStartE2EDuration="2.903815624s" podCreationTimestamp="2026-02-24 02:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:10:38.900029704 +0000 UTC m=+403.227683366" watchObservedRunningTime="2026-02-24 02:10:38.903815624 +0000 UTC m=+403.231469276"
Feb 24 02:10:39.085472 master-0 kubenswrapper[7864]: I0224 02:10:39.085370 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:10:39.085472 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:10:39.085472 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:10:39.085472 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:10:39.085977 master-0 kubenswrapper[7864]: I0224 02:10:39.085498 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:10:39.252730 master-0 kubenswrapper[7864]: W0224 02:10:39.252491 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf2b8111_41c6_4333_b473_4c08fb836f70.slice/crio-513f9261949841afe139d9cdba0a1314c71b8cc3ca522e4a37e97a5c0f7cd056 WatchSource:0}: Error finding container 513f9261949841afe139d9cdba0a1314c71b8cc3ca522e4a37e97a5c0f7cd056: Status 404 returned error can't find the container with id 513f9261949841afe139d9cdba0a1314c71b8cc3ca522e4a37e97a5c0f7cd056
Feb 24 02:10:39.253225 master-0 kubenswrapper[7864]: I0224 02:10:39.253148 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-66lml"]
Feb 24 02:10:39.889136 master-0 kubenswrapper[7864]: I0224 02:10:39.889061 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" event={"ID":"df2b8111-41c6-4333-b473-4c08fb836f70","Type":"ContainerStarted","Data":"513f9261949841afe139d9cdba0a1314c71b8cc3ca522e4a37e97a5c0f7cd056"}
Feb 24 02:10:39.968230 master-0 kubenswrapper[7864]: I0224 02:10:39.968120 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw"]
Feb 24 02:10:39.969145 master-0 kubenswrapper[7864]: I0224 02:10:39.969086 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="kube-rbac-proxy" containerID="cri-o://0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836" gracePeriod=30
Feb 24 02:10:39.969303 master-0 kubenswrapper[7864]: I0224 02:10:39.969210 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="config-sync-controllers" containerID="cri-o://4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0" gracePeriod=30
Feb 24 02:10:39.969409 master-0 kubenswrapper[7864]: I0224 02:10:39.969039 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="cluster-cloud-controller-manager" containerID="cri-o://9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a" gracePeriod=30
Feb 24 02:10:40.084730 master-0 kubenswrapper[7864]: I0224 02:10:40.084641 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:10:40.084730 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:10:40.084730 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:10:40.084730 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:10:40.084993 master-0 kubenswrapper[7864]: I0224 02:10:40.084732 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:10:40.148064 master-0 kubenswrapper[7864]: I0224 02:10:40.147945 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw"
Feb 24 02:10:40.311900 master-0 kubenswrapper[7864]: I0224 02:10:40.311830 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-images\") pod \"9d706583-f8dc-4a3c-832b-7e8249d0c662\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") "
Feb 24 02:10:40.312129 master-0 kubenswrapper[7864]: I0224 02:10:40.312012 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9d706583-f8dc-4a3c-832b-7e8249d0c662-host-etc-kube\") pod \"9d706583-f8dc-4a3c-832b-7e8249d0c662\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") "
Feb 24 02:10:40.312228 master-0 kubenswrapper[7864]: I0224 02:10:40.312183 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d706583-f8dc-4a3c-832b-7e8249d0c662-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "9d706583-f8dc-4a3c-832b-7e8249d0c662" (UID: "9d706583-f8dc-4a3c-832b-7e8249d0c662"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:10:40.312278 master-0 kubenswrapper[7864]: I0224 02:10:40.312201 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d706583-f8dc-4a3c-832b-7e8249d0c662-cloud-controller-manager-operator-tls\") pod \"9d706583-f8dc-4a3c-832b-7e8249d0c662\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") "
Feb 24 02:10:40.312397 master-0 kubenswrapper[7864]: I0224 02:10:40.312362 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpg5p\" (UniqueName: \"kubernetes.io/projected/9d706583-f8dc-4a3c-832b-7e8249d0c662-kube-api-access-bpg5p\") pod \"9d706583-f8dc-4a3c-832b-7e8249d0c662\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") "
Feb 24 02:10:40.312486 master-0 kubenswrapper[7864]: I0224 02:10:40.312462 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-auth-proxy-config\") pod \"9d706583-f8dc-4a3c-832b-7e8249d0c662\" (UID: \"9d706583-f8dc-4a3c-832b-7e8249d0c662\") "
Feb 24 02:10:40.312601 master-0 kubenswrapper[7864]: I0224 02:10:40.312539 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-images" (OuterVolumeSpecName: "images") pod "9d706583-f8dc-4a3c-832b-7e8249d0c662" (UID: "9d706583-f8dc-4a3c-832b-7e8249d0c662"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:10:40.312957 master-0 kubenswrapper[7864]: I0224 02:10:40.312928 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "9d706583-f8dc-4a3c-832b-7e8249d0c662" (UID: "9d706583-f8dc-4a3c-832b-7e8249d0c662"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:10:40.313149 master-0 kubenswrapper[7864]: I0224 02:10:40.313123 7864 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:40.313199 master-0 kubenswrapper[7864]: I0224 02:10:40.313151 7864 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d706583-f8dc-4a3c-832b-7e8249d0c662-images\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:40.313199 master-0 kubenswrapper[7864]: I0224 02:10:40.313180 7864 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9d706583-f8dc-4a3c-832b-7e8249d0c662-host-etc-kube\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:40.320826 master-0 kubenswrapper[7864]: I0224 02:10:40.320792 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d706583-f8dc-4a3c-832b-7e8249d0c662-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "9d706583-f8dc-4a3c-832b-7e8249d0c662" (UID: "9d706583-f8dc-4a3c-832b-7e8249d0c662"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:10:40.321626 master-0 kubenswrapper[7864]: I0224 02:10:40.321528 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d706583-f8dc-4a3c-832b-7e8249d0c662-kube-api-access-bpg5p" (OuterVolumeSpecName: "kube-api-access-bpg5p") pod "9d706583-f8dc-4a3c-832b-7e8249d0c662" (UID: "9d706583-f8dc-4a3c-832b-7e8249d0c662"). InnerVolumeSpecName "kube-api-access-bpg5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:10:40.414880 master-0 kubenswrapper[7864]: I0224 02:10:40.414757 7864 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d706583-f8dc-4a3c-832b-7e8249d0c662-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:40.414880 master-0 kubenswrapper[7864]: I0224 02:10:40.414824 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpg5p\" (UniqueName: \"kubernetes.io/projected/9d706583-f8dc-4a3c-832b-7e8249d0c662-kube-api-access-bpg5p\") on node \"master-0\" DevicePath \"\""
Feb 24 02:10:41.084771 master-0 kubenswrapper[7864]: I0224 02:10:41.084657 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:10:41.084771 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:10:41.084771 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:10:41.084771 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:10:41.085602 master-0 kubenswrapper[7864]: I0224 02:10:41.084815 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:10:41.204110 master-0 kubenswrapper[7864]: I0224 02:10:41.204036 7864 generic.go:334] "Generic (PLEG): container finished" podID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerID="0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836" exitCode=0
Feb 24 02:10:41.204110 master-0 kubenswrapper[7864]: I0224 02:10:41.204095 7864 generic.go:334] "Generic (PLEG): container finished" podID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerID="4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0" exitCode=0
Feb 24 02:10:41.204110 master-0 kubenswrapper[7864]: I0224 02:10:41.204114 7864 generic.go:334] "Generic (PLEG): container finished" podID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerID="9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a" exitCode=0
Feb 24 02:10:41.205800 master-0 kubenswrapper[7864]: I0224 02:10:41.205703 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" event={"ID":"9d706583-f8dc-4a3c-832b-7e8249d0c662","Type":"ContainerDied","Data":"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836"}
Feb 24 02:10:41.211038 master-0 kubenswrapper[7864]: I0224 02:10:41.207969 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw"
Feb 24 02:10:41.211382 master-0 kubenswrapper[7864]: I0224 02:10:41.210988 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" event={"ID":"9d706583-f8dc-4a3c-832b-7e8249d0c662","Type":"ContainerDied","Data":"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0"}
Feb 24 02:10:41.211555 master-0 kubenswrapper[7864]: I0224 02:10:41.211526 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" event={"ID":"9d706583-f8dc-4a3c-832b-7e8249d0c662","Type":"ContainerDied","Data":"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a"}
Feb 24 02:10:41.213462 master-0 kubenswrapper[7864]: I0224 02:10:41.211822 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw" event={"ID":"9d706583-f8dc-4a3c-832b-7e8249d0c662","Type":"ContainerDied","Data":"acbd6c24283dc82192e4eb0fdc26b53de15fac7e1a937183378a366b3365f0ee"}
Feb 24 02:10:41.213757 master-0 kubenswrapper[7864]: I0224 02:10:41.213661 7864 scope.go:117] "RemoveContainer" containerID="0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836"
Feb 24 02:10:41.243973 master-0 kubenswrapper[7864]: I0224 02:10:41.243899 7864 scope.go:117] "RemoveContainer" containerID="4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0"
Feb 24 02:10:41.279049 master-0 kubenswrapper[7864]: I0224 02:10:41.278987 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw"]
Feb 24 02:10:41.282066 master-0 kubenswrapper[7864]: I0224 02:10:41.282033 7864
scope.go:117] "RemoveContainer" containerID="9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a" Feb 24 02:10:41.283649 master-0 kubenswrapper[7864]: I0224 02:10:41.283619 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-r88hw"] Feb 24 02:10:41.298550 master-0 kubenswrapper[7864]: I0224 02:10:41.298520 7864 scope.go:117] "RemoveContainer" containerID="0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836" Feb 24 02:10:41.299220 master-0 kubenswrapper[7864]: E0224 02:10:41.299181 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836\": container with ID starting with 0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836 not found: ID does not exist" containerID="0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836" Feb 24 02:10:41.299290 master-0 kubenswrapper[7864]: I0224 02:10:41.299236 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836"} err="failed to get container status \"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836\": rpc error: code = NotFound desc = could not find container \"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836\": container with ID starting with 0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836 not found: ID does not exist" Feb 24 02:10:41.299290 master-0 kubenswrapper[7864]: I0224 02:10:41.299271 7864 scope.go:117] "RemoveContainer" containerID="4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0" Feb 24 02:10:41.299810 master-0 kubenswrapper[7864]: E0224 02:10:41.299774 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0\": container with ID starting with 4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0 not found: ID does not exist" containerID="4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0" Feb 24 02:10:41.300007 master-0 kubenswrapper[7864]: I0224 02:10:41.299810 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0"} err="failed to get container status \"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0\": rpc error: code = NotFound desc = could not find container \"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0\": container with ID starting with 4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0 not found: ID does not exist" Feb 24 02:10:41.300007 master-0 kubenswrapper[7864]: I0224 02:10:41.299831 7864 scope.go:117] "RemoveContainer" containerID="9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a" Feb 24 02:10:41.300864 master-0 kubenswrapper[7864]: E0224 02:10:41.300809 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a\": container with ID starting with 9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a not found: ID does not exist" containerID="9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a" Feb 24 02:10:41.300948 master-0 kubenswrapper[7864]: I0224 02:10:41.300865 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a"} err="failed to get container status \"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a\": rpc error: code = NotFound desc = could not find container 
\"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a\": container with ID starting with 9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a not found: ID does not exist" Feb 24 02:10:41.300948 master-0 kubenswrapper[7864]: I0224 02:10:41.300902 7864 scope.go:117] "RemoveContainer" containerID="0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836" Feb 24 02:10:41.301241 master-0 kubenswrapper[7864]: I0224 02:10:41.301202 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836"} err="failed to get container status \"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836\": rpc error: code = NotFound desc = could not find container \"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836\": container with ID starting with 0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836 not found: ID does not exist" Feb 24 02:10:41.301241 master-0 kubenswrapper[7864]: I0224 02:10:41.301231 7864 scope.go:117] "RemoveContainer" containerID="4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0" Feb 24 02:10:41.301797 master-0 kubenswrapper[7864]: I0224 02:10:41.301771 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0"} err="failed to get container status \"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0\": rpc error: code = NotFound desc = could not find container \"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0\": container with ID starting with 4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0 not found: ID does not exist" Feb 24 02:10:41.301881 master-0 kubenswrapper[7864]: I0224 02:10:41.301798 7864 scope.go:117] "RemoveContainer" containerID="9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a" Feb 24 
02:10:41.303806 master-0 kubenswrapper[7864]: I0224 02:10:41.303775 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a"} err="failed to get container status \"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a\": rpc error: code = NotFound desc = could not find container \"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a\": container with ID starting with 9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a not found: ID does not exist" Feb 24 02:10:41.303913 master-0 kubenswrapper[7864]: I0224 02:10:41.303897 7864 scope.go:117] "RemoveContainer" containerID="0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836" Feb 24 02:10:41.305092 master-0 kubenswrapper[7864]: I0224 02:10:41.304960 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836"} err="failed to get container status \"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836\": rpc error: code = NotFound desc = could not find container \"0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836\": container with ID starting with 0fbe3052c7d44e803671e80911250aeacc2950ee3e17cc34415c4b35604a8836 not found: ID does not exist" Feb 24 02:10:41.305166 master-0 kubenswrapper[7864]: I0224 02:10:41.305097 7864 scope.go:117] "RemoveContainer" containerID="4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0" Feb 24 02:10:41.305418 master-0 kubenswrapper[7864]: I0224 02:10:41.305393 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0"} err="failed to get container status \"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0\": rpc error: code = NotFound desc = could not find container 
\"4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0\": container with ID starting with 4dd31d54c217b125b138c8239ebb34954acd0d48c9fd9ddcce9e901bcfa40ed0 not found: ID does not exist" Feb 24 02:10:41.305539 master-0 kubenswrapper[7864]: I0224 02:10:41.305522 7864 scope.go:117] "RemoveContainer" containerID="9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a" Feb 24 02:10:41.306093 master-0 kubenswrapper[7864]: I0224 02:10:41.306046 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a"} err="failed to get container status \"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a\": rpc error: code = NotFound desc = could not find container \"9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a\": container with ID starting with 9c01867bbd8b0b17d3adb371822b696215e7163cff7ac59ddc753559ec32f43a not found: ID does not exist" Feb 24 02:10:41.321471 master-0 kubenswrapper[7864]: I0224 02:10:41.321317 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"] Feb 24 02:10:41.321743 master-0 kubenswrapper[7864]: E0224 02:10:41.321716 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="config-sync-controllers" Feb 24 02:10:41.321791 master-0 kubenswrapper[7864]: I0224 02:10:41.321747 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="config-sync-controllers" Feb 24 02:10:41.321823 master-0 kubenswrapper[7864]: E0224 02:10:41.321799 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="kube-rbac-proxy" Feb 24 02:10:41.321823 master-0 kubenswrapper[7864]: I0224 02:10:41.321816 7864 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="kube-rbac-proxy" Feb 24 02:10:41.321889 master-0 kubenswrapper[7864]: E0224 02:10:41.321850 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="cluster-cloud-controller-manager" Feb 24 02:10:41.321889 master-0 kubenswrapper[7864]: I0224 02:10:41.321864 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="cluster-cloud-controller-manager" Feb 24 02:10:41.322107 master-0 kubenswrapper[7864]: I0224 02:10:41.322066 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="cluster-cloud-controller-manager" Feb 24 02:10:41.322159 master-0 kubenswrapper[7864]: I0224 02:10:41.322110 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="kube-rbac-proxy" Feb 24 02:10:41.322159 master-0 kubenswrapper[7864]: I0224 02:10:41.322134 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" containerName="config-sync-controllers" Feb 24 02:10:41.323654 master-0 kubenswrapper[7864]: I0224 02:10:41.323621 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.326111 master-0 kubenswrapper[7864]: I0224 02:10:41.326069 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lprdj" Feb 24 02:10:41.327131 master-0 kubenswrapper[7864]: I0224 02:10:41.326952 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 24 02:10:41.327250 master-0 kubenswrapper[7864]: I0224 02:10:41.327215 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 24 02:10:41.327681 master-0 kubenswrapper[7864]: I0224 02:10:41.327652 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 24 02:10:41.328676 master-0 kubenswrapper[7864]: I0224 02:10:41.328637 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 02:10:41.329531 master-0 kubenswrapper[7864]: I0224 02:10:41.329491 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 24 02:10:41.353451 master-0 kubenswrapper[7864]: I0224 02:10:41.351726 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/2.log" Feb 24 02:10:41.430406 master-0 kubenswrapper[7864]: I0224 02:10:41.430337 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.430519 master-0 kubenswrapper[7864]: I0224 02:10:41.430424 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.430519 master-0 kubenswrapper[7864]: I0224 02:10:41.430477 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.430612 master-0 kubenswrapper[7864]: I0224 02:10:41.430540 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8e70a9f5-1154-40e9-a487-21e36e7f420a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.430771 master-0 kubenswrapper[7864]: I0224 02:10:41.430720 7864 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb7jb\" (UniqueName: \"kubernetes.io/projected/8e70a9f5-1154-40e9-a487-21e36e7f420a-kube-api-access-mb7jb\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.531976 master-0 kubenswrapper[7864]: I0224 02:10:41.531888 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb7jb\" (UniqueName: \"kubernetes.io/projected/8e70a9f5-1154-40e9-a487-21e36e7f420a-kube-api-access-mb7jb\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.532176 master-0 kubenswrapper[7864]: I0224 02:10:41.531994 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.532176 master-0 kubenswrapper[7864]: I0224 02:10:41.532043 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.532176 
master-0 kubenswrapper[7864]: I0224 02:10:41.532087 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.532455 master-0 kubenswrapper[7864]: I0224 02:10:41.532378 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8e70a9f5-1154-40e9-a487-21e36e7f420a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.532881 master-0 kubenswrapper[7864]: I0224 02:10:41.532803 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8e70a9f5-1154-40e9-a487-21e36e7f420a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.533601 master-0 kubenswrapper[7864]: I0224 02:10:41.533515 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.533705 master-0 kubenswrapper[7864]: I0224 
02:10:41.533670 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.540468 master-0 kubenswrapper[7864]: I0224 02:10:41.540401 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.551878 master-0 kubenswrapper[7864]: I0224 02:10:41.551811 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/3.log" Feb 24 02:10:41.562392 master-0 kubenswrapper[7864]: I0224 02:10:41.562311 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb7jb\" (UniqueName: \"kubernetes.io/projected/8e70a9f5-1154-40e9-a487-21e36e7f420a-kube-api-access-mb7jb\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.642821 master-0 kubenswrapper[7864]: I0224 02:10:41.642658 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:10:41.677921 master-0 kubenswrapper[7864]: W0224 02:10:41.677835 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e70a9f5_1154_40e9_a487_21e36e7f420a.slice/crio-1cf9c3623efa047b3c733a0c601bd847d659d71e97fdf999c590347704f0d5c3 WatchSource:0}: Error finding container 1cf9c3623efa047b3c733a0c601bd847d659d71e97fdf999c590347704f0d5c3: Status 404 returned error can't find the container with id 1cf9c3623efa047b3c733a0c601bd847d659d71e97fdf999c590347704f0d5c3 Feb 24 02:10:41.752969 master-0 kubenswrapper[7864]: I0224 02:10:41.752913 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-22sgl_6a08a1e4-cf92-4733-a8af-c7ac5b21e925/router/0.log" Feb 24 02:10:41.890021 master-0 kubenswrapper[7864]: I0224 02:10:41.889959 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d706583-f8dc-4a3c-832b-7e8249d0c662" path="/var/lib/kubelet/pods/9d706583-f8dc-4a3c-832b-7e8249d0c662/volumes" Feb 24 02:10:41.948917 master-0 kubenswrapper[7864]: I0224 02:10:41.948760 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-77597cc7cf-8j2k2_b176946a-c056-441c-9145-b88ca4d75758/fix-audit-permissions/0.log" Feb 24 02:10:42.083771 master-0 kubenswrapper[7864]: I0224 02:10:42.082854 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:10:42.352228 master-0 kubenswrapper[7864]: I0224 02:10:42.351546 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:42.352228 
master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:42.352228 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:42.352228 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:42.352228 master-0 kubenswrapper[7864]: I0224 02:10:42.351664 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:42.360851 master-0 kubenswrapper[7864]: I0224 02:10:42.360789 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-77597cc7cf-8j2k2_b176946a-c056-441c-9145-b88ca4d75758/oauth-apiserver/0.log" Feb 24 02:10:42.404157 master-0 kubenswrapper[7864]: I0224 02:10:42.385353 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"d769b62bae7da248060974b37bc61a65e0831df5e231d5c8e62a89bc58f3df85"} Feb 24 02:10:42.404157 master-0 kubenswrapper[7864]: I0224 02:10:42.385444 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"1cf9c3623efa047b3c733a0c601bd847d659d71e97fdf999c590347704f0d5c3"} Feb 24 02:10:42.404157 master-0 kubenswrapper[7864]: I0224 02:10:42.394241 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-mtrdk_91168f3d-70eb-4351-bb83-5411a96ad29d/kube-rbac-proxy/0.log" Feb 24 02:10:42.555279 master-0 kubenswrapper[7864]: I0224 02:10:42.554885 7864 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-mtrdk_91168f3d-70eb-4351-bb83-5411a96ad29d/cluster-autoscaler-operator/0.log" Feb 24 02:10:42.751805 master-0 kubenswrapper[7864]: I0224 02:10:42.751749 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/0.log" Feb 24 02:10:43.084561 master-0 kubenswrapper[7864]: I0224 02:10:43.084455 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:43.084561 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:43.084561 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:43.084561 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:43.085080 master-0 kubenswrapper[7864]: I0224 02:10:43.084624 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:43.151358 master-0 kubenswrapper[7864]: I0224 02:10:43.150430 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/1.log" Feb 24 02:10:43.357673 master-0 kubenswrapper[7864]: I0224 02:10:43.357614 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/baremetal-kube-rbac-proxy/0.log" Feb 24 02:10:43.409463 master-0 kubenswrapper[7864]: I0224 02:10:43.409400 7864 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" event={"ID":"df2b8111-41c6-4333-b473-4c08fb836f70","Type":"ContainerStarted","Data":"94827312ec38c6658c55e209ea3bcc5483bed338d5de6a56306adc1c033c902b"} Feb 24 02:10:43.411906 master-0 kubenswrapper[7864]: I0224 02:10:43.411844 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"85b8f7888ec204fa9fea6a7b1efb488127963819c9272f83acc89eb73dc0b286"} Feb 24 02:10:43.411906 master-0 kubenswrapper[7864]: I0224 02:10:43.411876 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"092cbfe8313c87ea7a79610f389e04195756c11c4aca575ebaf70dbe1a3f496d"} Feb 24 02:10:43.441206 master-0 kubenswrapper[7864]: I0224 02:10:43.441100 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" podStartSLOduration=2.441074406 podStartE2EDuration="2.441074406s" podCreationTimestamp="2026-02-24 02:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:10:43.43637285 +0000 UTC m=+407.764026512" watchObservedRunningTime="2026-02-24 02:10:43.441074406 +0000 UTC m=+407.768728068" Feb 24 02:10:43.551169 master-0 kubenswrapper[7864]: I0224 02:10:43.551115 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-ckntz_a4cea44a-1c6e-465f-97df-2c951056cb85/control-plane-machine-set-operator/0.log" Feb 24 02:10:43.756406 master-0 kubenswrapper[7864]: I0224 
02:10:43.756345 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-dsjgm_0ce6dd93-084c-4e15-8b7c-e0829a6df14e/kube-rbac-proxy/0.log" Feb 24 02:10:43.955843 master-0 kubenswrapper[7864]: I0224 02:10:43.955708 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-dsjgm_0ce6dd93-084c-4e15-8b7c-e0829a6df14e/machine-api-operator/0.log" Feb 24 02:10:44.085693 master-0 kubenswrapper[7864]: I0224 02:10:44.085539 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:44.085693 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:44.085693 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:44.085693 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:44.085693 master-0 kubenswrapper[7864]: I0224 02:10:44.085646 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:44.153938 master-0 kubenswrapper[7864]: I0224 02:10:44.153835 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-c5wlk_011c6603-d533-4449-b409-f6f698a3bd50/cluster-storage-operator/0.log" Feb 24 02:10:44.348673 master-0 kubenswrapper[7864]: I0224 02:10:44.348470 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/2.log" Feb 24 02:10:44.423920 master-0 kubenswrapper[7864]: I0224 02:10:44.423826 7864 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" event={"ID":"df2b8111-41c6-4333-b473-4c08fb836f70","Type":"ContainerStarted","Data":"43bd616c2dbad772613b397d816c6f3ebc1ceb3dea2da9e16799c92367bf939a"} Feb 24 02:10:44.481550 master-0 kubenswrapper[7864]: I0224 02:10:44.478614 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" podStartSLOduration=2.638246417 podStartE2EDuration="6.478561632s" podCreationTimestamp="2026-02-24 02:10:38 +0000 UTC" firstStartedPulling="2026-02-24 02:10:39.255681193 +0000 UTC m=+403.583334855" lastFinishedPulling="2026-02-24 02:10:43.095996408 +0000 UTC m=+407.423650070" observedRunningTime="2026-02-24 02:10:44.471395488 +0000 UTC m=+408.799049150" watchObservedRunningTime="2026-02-24 02:10:44.478561632 +0000 UTC m=+408.806215284" Feb 24 02:10:44.555832 master-0 kubenswrapper[7864]: I0224 02:10:44.555764 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/3.log" Feb 24 02:10:44.753850 master-0 kubenswrapper[7864]: I0224 02:10:44.753796 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-c95qc_7b098bd4-5751-4b01-8409-0688fd29233e/csi-snapshot-controller-operator/0.log" Feb 24 02:10:44.954515 master-0 kubenswrapper[7864]: I0224 02:10:44.954402 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-c95qc_7b098bd4-5751-4b01-8409-0688fd29233e/csi-snapshot-controller-operator/1.log" Feb 24 02:10:45.085609 master-0 kubenswrapper[7864]: I0224 02:10:45.085393 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:45.085609 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:45.085609 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:45.085609 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:45.085609 master-0 kubenswrapper[7864]: I0224 02:10:45.085500 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:45.155118 master-0 kubenswrapper[7864]: I0224 02:10:45.155024 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jb9vb_fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/etcd-operator/0.log" Feb 24 02:10:45.352821 master-0 kubenswrapper[7864]: I0224 02:10:45.352627 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jb9vb_fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/etcd-operator/1.log" Feb 24 02:10:45.556617 master-0 kubenswrapper[7864]: I0224 02:10:45.556513 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_64b7ea36-8849-4955-80b5-c7e7c12fcc29/installer/0.log" Feb 24 02:10:45.750196 master-0 kubenswrapper[7864]: I0224 02:10:45.750110 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-2492q_f85222bf-f51a-4232-8db1-1e6ee593617b/kube-apiserver-operator/1.log" Feb 24 02:10:45.949756 master-0 kubenswrapper[7864]: I0224 02:10:45.949681 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-2492q_f85222bf-f51a-4232-8db1-1e6ee593617b/kube-apiserver-operator/2.log" Feb 24 
02:10:46.084787 master-0 kubenswrapper[7864]: I0224 02:10:46.084638 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:46.084787 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:46.084787 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:46.084787 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:46.084787 master-0 kubenswrapper[7864]: I0224 02:10:46.084734 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:46.152517 master-0 kubenswrapper[7864]: I0224 02:10:46.152412 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/setup/0.log" Feb 24 02:10:46.361734 master-0 kubenswrapper[7864]: I0224 02:10:46.360442 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver/0.log" Feb 24 02:10:46.552301 master-0 kubenswrapper[7864]: I0224 02:10:46.547698 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver-insecure-readyz/0.log" Feb 24 02:10:46.757271 master-0 kubenswrapper[7864]: I0224 02:10:46.756408 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_bd02da41-8a48-4436-ae58-6363e7554898/installer/0.log" Feb 24 02:10:46.770980 master-0 kubenswrapper[7864]: I0224 02:10:46.770919 7864 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6"] Feb 24 02:10:46.772678 master-0 kubenswrapper[7864]: I0224 02:10:46.772660 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.775471 master-0 kubenswrapper[7864]: I0224 02:10:46.775455 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-fdtcj" Feb 24 02:10:46.775749 master-0 kubenswrapper[7864]: I0224 02:10:46.775736 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 24 02:10:46.775997 master-0 kubenswrapper[7864]: I0224 02:10:46.775897 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 24 02:10:46.792548 master-0 kubenswrapper[7864]: I0224 02:10:46.792509 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-2qn8m"] Feb 24 02:10:46.794288 master-0 kubenswrapper[7864]: I0224 02:10:46.794270 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.795338 master-0 kubenswrapper[7864]: I0224 02:10:46.795234 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6"] Feb 24 02:10:46.803243 master-0 kubenswrapper[7864]: I0224 02:10:46.803215 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 24 02:10:46.803469 master-0 kubenswrapper[7864]: I0224 02:10:46.803453 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 24 02:10:46.803665 master-0 kubenswrapper[7864]: I0224 02:10:46.803463 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-c5cjc" Feb 24 02:10:46.823496 master-0 kubenswrapper[7864]: I0224 02:10:46.818730 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-f6f26"] Feb 24 02:10:46.823496 master-0 kubenswrapper[7864]: I0224 02:10:46.820919 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.824764 master-0 kubenswrapper[7864]: I0224 02:10:46.823833 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 24 02:10:46.824764 master-0 kubenswrapper[7864]: I0224 02:10:46.824099 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 24 02:10:46.824764 master-0 kubenswrapper[7864]: I0224 02:10:46.824106 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-m4t4r" Feb 24 02:10:46.829601 master-0 kubenswrapper[7864]: I0224 02:10:46.826948 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.835459 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a5305004-5311-4bc4-ad7c-6670f97c89cb-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.835513 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kznmr\" (UniqueName: \"kubernetes.io/projected/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-api-access-kznmr\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.835541 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-98725\" (UniqueName: \"kubernetes.io/projected/24765ff1-5e7d-4100-ad81-8f73555fc0a2-kube-api-access-98725\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.835562 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.835636 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-wtmp\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.835753 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px2vd\" (UniqueName: \"kubernetes.io/projected/608a8a56-daee-4fa1-8300-42155217c68b-kube-api-access-px2vd\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.835885 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/24765ff1-5e7d-4100-ad81-8f73555fc0a2-metrics-client-ca\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " 
pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.835923 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.836029 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/608a8a56-daee-4fa1-8300-42155217c68b-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.836193 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.836266 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 
02:10:46.836366 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-root\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.836484 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.836513 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.836643 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.836668 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.836700 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-textfile\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.836889 master-0 kubenswrapper[7864]: I0224 02:10:46.836791 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-sys\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.853136 master-0 kubenswrapper[7864]: I0224 02:10:46.853056 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-f6f26"] Feb 24 02:10:46.939212 master-0 kubenswrapper[7864]: I0224 02:10:46.939147 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.939212 master-0 kubenswrapper[7864]: I0224 02:10:46.939208 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls\") pod 
\"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.939544 master-0 kubenswrapper[7864]: I0224 02:10:46.939289 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.939544 master-0 kubenswrapper[7864]: I0224 02:10:46.939314 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.939544 master-0 kubenswrapper[7864]: I0224 02:10:46.939344 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-textfile\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.939544 master-0 kubenswrapper[7864]: I0224 02:10:46.939393 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-sys\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.939544 master-0 kubenswrapper[7864]: I0224 02:10:46.939442 7864 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a5305004-5311-4bc4-ad7c-6670f97c89cb-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.939544 master-0 kubenswrapper[7864]: I0224 02:10:46.939471 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kznmr\" (UniqueName: \"kubernetes.io/projected/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-api-access-kznmr\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.939544 master-0 kubenswrapper[7864]: I0224 02:10:46.939499 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98725\" (UniqueName: \"kubernetes.io/projected/24765ff1-5e7d-4100-ad81-8f73555fc0a2-kube-api-access-98725\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.939544 master-0 kubenswrapper[7864]: I0224 02:10:46.939525 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.939910 master-0 kubenswrapper[7864]: I0224 02:10:46.939590 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-wtmp\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " 
pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.939910 master-0 kubenswrapper[7864]: I0224 02:10:46.939637 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px2vd\" (UniqueName: \"kubernetes.io/projected/608a8a56-daee-4fa1-8300-42155217c68b-kube-api-access-px2vd\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.939910 master-0 kubenswrapper[7864]: I0224 02:10:46.939696 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/24765ff1-5e7d-4100-ad81-8f73555fc0a2-metrics-client-ca\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.939910 master-0 kubenswrapper[7864]: I0224 02:10:46.939721 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.939910 master-0 kubenswrapper[7864]: I0224 02:10:46.939745 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/608a8a56-daee-4fa1-8300-42155217c68b-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.939910 master-0 kubenswrapper[7864]: I0224 02:10:46.939790 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.939910 master-0 kubenswrapper[7864]: I0224 02:10:46.939817 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.939910 master-0 kubenswrapper[7864]: I0224 02:10:46.939843 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-root\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.940223 master-0 kubenswrapper[7864]: I0224 02:10:46.939930 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-root\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.941019 master-0 kubenswrapper[7864]: I0224 02:10:46.940973 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-textfile\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.941099 master-0 kubenswrapper[7864]: I0224 02:10:46.941017 7864 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.941224 master-0 kubenswrapper[7864]: I0224 02:10:46.941198 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-sys\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.943659 master-0 kubenswrapper[7864]: I0224 02:10:46.942276 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-wtmp\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.943659 master-0 kubenswrapper[7864]: I0224 02:10:46.942541 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/608a8a56-daee-4fa1-8300-42155217c68b-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.943659 master-0 kubenswrapper[7864]: I0224 02:10:46.943180 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" 
Feb 24 02:10:46.943659 master-0 kubenswrapper[7864]: I0224 02:10:46.943225 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a5305004-5311-4bc4-ad7c-6670f97c89cb-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.943659 master-0 kubenswrapper[7864]: I0224 02:10:46.943400 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/24765ff1-5e7d-4100-ad81-8f73555fc0a2-metrics-client-ca\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.952114 master-0 kubenswrapper[7864]: I0224 02:10:46.945560 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.952114 master-0 kubenswrapper[7864]: I0224 02:10:46.946408 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.952114 master-0 kubenswrapper[7864]: I0224 02:10:46.946799 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.952114 master-0 kubenswrapper[7864]: I0224 02:10:46.947008 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.952114 master-0 kubenswrapper[7864]: I0224 02:10:46.947559 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:46.952114 master-0 kubenswrapper[7864]: I0224 02:10:46.950731 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-tl97n_a02536a3-7d3e-4e74-9625-aefed518ec35/kube-controller-manager-operator/1.log" Feb 24 02:10:46.953157 master-0 kubenswrapper[7864]: I0224 02:10:46.953124 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.962340 master-0 kubenswrapper[7864]: I0224 02:10:46.961998 7864 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-98725\" (UniqueName: \"kubernetes.io/projected/24765ff1-5e7d-4100-ad81-8f73555fc0a2-kube-api-access-98725\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:46.964066 master-0 kubenswrapper[7864]: I0224 02:10:46.964026 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px2vd\" (UniqueName: \"kubernetes.io/projected/608a8a56-daee-4fa1-8300-42155217c68b-kube-api-access-px2vd\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:46.972077 master-0 kubenswrapper[7864]: I0224 02:10:46.971077 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kznmr\" (UniqueName: \"kubernetes.io/projected/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-api-access-kznmr\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:47.084709 master-0 kubenswrapper[7864]: I0224 02:10:47.084526 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:47.084709 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:47.084709 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:47.084709 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:47.084709 master-0 kubenswrapper[7864]: I0224 02:10:47.084653 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:47.115659 master-0 kubenswrapper[7864]: I0224 02:10:47.115562 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:10:47.151157 master-0 kubenswrapper[7864]: I0224 02:10:47.148786 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:10:47.151157 master-0 kubenswrapper[7864]: I0224 02:10:47.150160 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-tl97n_a02536a3-7d3e-4e74-9625-aefed518ec35/kube-controller-manager-operator/2.log" Feb 24 02:10:47.160478 master-0 kubenswrapper[7864]: I0224 02:10:47.160405 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:10:47.200768 master-0 kubenswrapper[7864]: W0224 02:10:47.200711 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24765ff1_5e7d_4100_ad81_8f73555fc0a2.slice/crio-a547b5d4267673c7d0d24b2e2ba4109ca6066121198db068b1fa1a5a39df064e WatchSource:0}: Error finding container a547b5d4267673c7d0d24b2e2ba4109ca6066121198db068b1fa1a5a39df064e: Status 404 returned error can't find the container with id a547b5d4267673c7d0d24b2e2ba4109ca6066121198db068b1fa1a5a39df064e Feb 24 02:10:47.359336 master-0 kubenswrapper[7864]: I0224 02:10:47.359194 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/3.log" Feb 24 02:10:47.460070 master-0 kubenswrapper[7864]: I0224 02:10:47.459943 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2qn8m" 
event={"ID":"24765ff1-5e7d-4100-ad81-8f73555fc0a2","Type":"ContainerStarted","Data":"a547b5d4267673c7d0d24b2e2ba4109ca6066121198db068b1fa1a5a39df064e"} Feb 24 02:10:47.556000 master-0 kubenswrapper[7864]: I0224 02:10:47.555689 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/0.log" Feb 24 02:10:47.654805 master-0 kubenswrapper[7864]: I0224 02:10:47.654746 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6"] Feb 24 02:10:47.661634 master-0 kubenswrapper[7864]: W0224 02:10:47.661475 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod608a8a56_daee_4fa1_8300_42155217c68b.slice/crio-9538eb885cdee2fa9a588e118eb2c741ad080c47591f7c5e45f680a3f6d76460 WatchSource:0}: Error finding container 9538eb885cdee2fa9a588e118eb2c741ad080c47591f7c5e45f680a3f6d76460: Status 404 returned error can't find the container with id 9538eb885cdee2fa9a588e118eb2c741ad080c47591f7c5e45f680a3f6d76460 Feb 24 02:10:47.697520 master-0 kubenswrapper[7864]: I0224 02:10:47.697434 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-f6f26"] Feb 24 02:10:47.706647 master-0 kubenswrapper[7864]: W0224 02:10:47.705856 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5305004_5311_4bc4_ad7c_6670f97c89cb.slice/crio-a8543f0f38e0eb6ba966eb5116372824fd9c0ed337e28520e9ed982545e8ec8c WatchSource:0}: Error finding container a8543f0f38e0eb6ba966eb5116372824fd9c0ed337e28520e9ed982545e8ec8c: Status 404 returned error can't find the container with id a8543f0f38e0eb6ba966eb5116372824fd9c0ed337e28520e9ed982545e8ec8c Feb 24 02:10:47.762675 master-0 kubenswrapper[7864]: I0224 02:10:47.762603 7864 log.go:25] 
"Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/4.log" Feb 24 02:10:47.952864 master-0 kubenswrapper[7864]: I0224 02:10:47.952487 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/1.log" Feb 24 02:10:48.084071 master-0 kubenswrapper[7864]: I0224 02:10:48.083808 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:48.084071 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:48.084071 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:48.084071 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:48.084482 master-0 kubenswrapper[7864]: I0224 02:10:48.083912 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:48.156960 master-0 kubenswrapper[7864]: I0224 02:10:48.155952 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/0.log" Feb 24 02:10:48.357485 master-0 kubenswrapper[7864]: I0224 02:10:48.357402 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/1.log" Feb 24 02:10:48.476647 master-0 kubenswrapper[7864]: I0224 02:10:48.471951 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" 
event={"ID":"608a8a56-daee-4fa1-8300-42155217c68b","Type":"ContainerStarted","Data":"20f9b5f75c2cde17a1d4633c252f670c4f9f5295d80a5639b06ba7c15a2a2e27"} Feb 24 02:10:48.476647 master-0 kubenswrapper[7864]: I0224 02:10:48.472037 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" event={"ID":"608a8a56-daee-4fa1-8300-42155217c68b","Type":"ContainerStarted","Data":"aec27e3292c40382740e058d73f54f54825380bfa9c5ef79af6e0003ccd5e974"} Feb 24 02:10:48.476647 master-0 kubenswrapper[7864]: I0224 02:10:48.472057 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" event={"ID":"608a8a56-daee-4fa1-8300-42155217c68b","Type":"ContainerStarted","Data":"9538eb885cdee2fa9a588e118eb2c741ad080c47591f7c5e45f680a3f6d76460"} Feb 24 02:10:48.476647 master-0 kubenswrapper[7864]: I0224 02:10:48.474084 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" event={"ID":"a5305004-5311-4bc4-ad7c-6670f97c89cb","Type":"ContainerStarted","Data":"a8543f0f38e0eb6ba966eb5116372824fd9c0ed337e28520e9ed982545e8ec8c"} Feb 24 02:10:48.554883 master-0 kubenswrapper[7864]: I0224 02:10:48.554835 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_683deae1-94b1-4c17-a73f-ad628a09134b/installer/0.log" Feb 24 02:10:48.752657 master-0 kubenswrapper[7864]: I0224 02:10:48.751970 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-8tttg_f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/kube-scheduler-operator-container/1.log" Feb 24 02:10:48.950345 master-0 kubenswrapper[7864]: I0224 02:10:48.949852 7864 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-8tttg_f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/kube-scheduler-operator-container/2.log" Feb 24 02:10:49.084461 master-0 kubenswrapper[7864]: I0224 02:10:49.084416 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:49.084461 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:49.084461 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:49.084461 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:49.084787 master-0 kubenswrapper[7864]: I0224 02:10:49.084499 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:49.148549 master-0 kubenswrapper[7864]: I0224 02:10:49.148505 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-sl5hz_b36d8451-0fda-4d9d-a850-d05c8f847016/openshift-apiserver-operator/1.log" Feb 24 02:10:49.357975 master-0 kubenswrapper[7864]: I0224 02:10:49.357808 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-sl5hz_b36d8451-0fda-4d9d-a850-d05c8f847016/openshift-apiserver-operator/2.log" Feb 24 02:10:49.491503 master-0 kubenswrapper[7864]: I0224 02:10:49.491426 7864 generic.go:334] "Generic (PLEG): container finished" podID="24765ff1-5e7d-4100-ad81-8f73555fc0a2" containerID="1f52501726ed970c81fbc87519c42dbbcb0a0375319ca30b25aeac0dc7303da1" exitCode=0 Feb 24 02:10:49.491906 master-0 kubenswrapper[7864]: I0224 02:10:49.491498 7864 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2qn8m" event={"ID":"24765ff1-5e7d-4100-ad81-8f73555fc0a2","Type":"ContainerDied","Data":"1f52501726ed970c81fbc87519c42dbbcb0a0375319ca30b25aeac0dc7303da1"} Feb 24 02:10:49.547837 master-0 kubenswrapper[7864]: I0224 02:10:49.547370 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-79dc9447fd-x64vl_25190a18-bdac-479b-b526-840d28636be3/fix-audit-permissions/0.log" Feb 24 02:10:49.756149 master-0 kubenswrapper[7864]: I0224 02:10:49.756080 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-79dc9447fd-x64vl_25190a18-bdac-479b-b526-840d28636be3/openshift-apiserver/0.log" Feb 24 02:10:49.925567 master-0 kubenswrapper[7864]: I0224 02:10:49.925463 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 24 02:10:49.955132 master-0 kubenswrapper[7864]: I0224 02:10:49.954618 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-79dc9447fd-x64vl_25190a18-bdac-479b-b526-840d28636be3/openshift-apiserver-check-endpoints/0.log" Feb 24 02:10:50.084798 master-0 kubenswrapper[7864]: I0224 02:10:50.084740 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:50.084798 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:50.084798 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:50.084798 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:50.085114 master-0 kubenswrapper[7864]: I0224 02:10:50.084808 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:50.154610 master-0 kubenswrapper[7864]: I0224 02:10:50.152714 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jb9vb_fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/etcd-operator/0.log" Feb 24 02:10:50.348052 master-0 kubenswrapper[7864]: I0224 02:10:50.348014 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jb9vb_fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/etcd-operator/1.log" Feb 24 02:10:50.507038 master-0 kubenswrapper[7864]: I0224 02:10:50.506896 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" event={"ID":"608a8a56-daee-4fa1-8300-42155217c68b","Type":"ContainerStarted","Data":"860e1ea0a41f00b850cab433b6728eb3878d47cbf363a792c5a1a2425dd74bf4"} Feb 24 02:10:50.511109 master-0 kubenswrapper[7864]: I0224 02:10:50.511052 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2qn8m" event={"ID":"24765ff1-5e7d-4100-ad81-8f73555fc0a2","Type":"ContainerStarted","Data":"505153a6b58ee5fdb40b64cf1449d1ab8536604ceefcf028d98144e91d2cd947"} Feb 24 02:10:50.511109 master-0 kubenswrapper[7864]: I0224 02:10:50.511105 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2qn8m" event={"ID":"24765ff1-5e7d-4100-ad81-8f73555fc0a2","Type":"ContainerStarted","Data":"634ece1d92bdb1ceb44b0e5c54c19504b4ec18f00008defdfe406f50026a70a8"} Feb 24 02:10:50.516630 master-0 kubenswrapper[7864]: I0224 02:10:50.516583 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" event={"ID":"a5305004-5311-4bc4-ad7c-6670f97c89cb","Type":"ContainerStarted","Data":"8ae69ce84c01e5e4b4fac1f5290af78bea77d7988dab5915ebc9b71d7bb9c8b4"} Feb 24 02:10:50.516630 master-0 kubenswrapper[7864]: I0224 
02:10:50.516612 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" event={"ID":"a5305004-5311-4bc4-ad7c-6670f97c89cb","Type":"ContainerStarted","Data":"c5800339748674f2134be8e9b847e6be2d094f1a815b59844e33e063e8189399"} Feb 24 02:10:50.516630 master-0 kubenswrapper[7864]: I0224 02:10:50.516623 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" event={"ID":"a5305004-5311-4bc4-ad7c-6670f97c89cb","Type":"ContainerStarted","Data":"2072e285d4993b0d4fb9ce8970b1145b2c5b69f9a6f5dae58087e7fa262a83b4"} Feb 24 02:10:50.543431 master-0 kubenswrapper[7864]: I0224 02:10:50.543299 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" podStartSLOduration=2.9645494340000003 podStartE2EDuration="4.543266722s" podCreationTimestamp="2026-02-24 02:10:46 +0000 UTC" firstStartedPulling="2026-02-24 02:10:48.058708147 +0000 UTC m=+412.386361779" lastFinishedPulling="2026-02-24 02:10:49.637425445 +0000 UTC m=+413.965079067" observedRunningTime="2026-02-24 02:10:50.54094091 +0000 UTC m=+414.868594572" watchObservedRunningTime="2026-02-24 02:10:50.543266722 +0000 UTC m=+414.870920384" Feb 24 02:10:50.556961 master-0 kubenswrapper[7864]: I0224 02:10:50.556881 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7fgn_7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/openshift-controller-manager-operator/0.log" Feb 24 02:10:50.617054 master-0 kubenswrapper[7864]: I0224 02:10:50.616936 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.616904476 podStartE2EDuration="1.616904476s" podCreationTimestamp="2026-02-24 02:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:10:50.610404371 +0000 UTC m=+414.938058003" watchObservedRunningTime="2026-02-24 02:10:50.616904476 +0000 UTC m=+414.944558138" Feb 24 02:10:50.692599 master-0 kubenswrapper[7864]: I0224 02:10:50.691509 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-2qn8m" podStartSLOduration=3.537462491 podStartE2EDuration="4.691457965s" podCreationTimestamp="2026-02-24 02:10:46 +0000 UTC" firstStartedPulling="2026-02-24 02:10:47.205438996 +0000 UTC m=+411.533092628" lastFinishedPulling="2026-02-24 02:10:48.35943445 +0000 UTC m=+412.687088102" observedRunningTime="2026-02-24 02:10:50.686844851 +0000 UTC m=+415.014498493" watchObservedRunningTime="2026-02-24 02:10:50.691457965 +0000 UTC m=+415.019111597" Feb 24 02:10:50.719600 master-0 kubenswrapper[7864]: I0224 02:10:50.719503 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" podStartSLOduration=2.822374423 podStartE2EDuration="4.71947567s" podCreationTimestamp="2026-02-24 02:10:46 +0000 UTC" firstStartedPulling="2026-02-24 02:10:47.716688291 +0000 UTC m=+412.044341943" lastFinishedPulling="2026-02-24 02:10:49.613789568 +0000 UTC m=+413.941443190" observedRunningTime="2026-02-24 02:10:50.718213436 +0000 UTC m=+415.045867058" watchObservedRunningTime="2026-02-24 02:10:50.71947567 +0000 UTC m=+415.047129292" Feb 24 02:10:50.754603 master-0 kubenswrapper[7864]: I0224 02:10:50.750807 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7fgn_7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/openshift-controller-manager-operator/1.log" Feb 24 02:10:50.955932 master-0 kubenswrapper[7864]: I0224 02:10:50.955881 7864 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager_controller-manager-57df7db547-2v9c5_bd1a99d5-e213-42b3-9538-44f68d993184/controller-manager/0.log" Feb 24 02:10:51.085476 master-0 kubenswrapper[7864]: I0224 02:10:51.085377 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:51.085476 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:51.085476 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:51.085476 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:51.086081 master-0 kubenswrapper[7864]: I0224 02:10:51.085516 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:51.169969 master-0 kubenswrapper[7864]: I0224 02:10:51.169766 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-57df7db547-2v9c5_bd1a99d5-e213-42b3-9538-44f68d993184/controller-manager/1.log" Feb 24 02:10:51.359822 master-0 kubenswrapper[7864]: I0224 02:10:51.359614 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-56cd46585c-nhkd9_8e6fd0d2-d629-4399-b008-979f28390943/route-controller-manager/0.log" Feb 24 02:10:51.561635 master-0 kubenswrapper[7864]: I0224 02:10:51.561507 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-8cg5c_12b89e05-a503-47aa-90b2-4d741e015b19/catalog-operator/0.log" Feb 24 02:10:51.749219 master-0 kubenswrapper[7864]: I0224 02:10:51.749146 7864 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29531640-kptmw_24983c94-f158-4a07-854b-2e5455374f19/collect-profiles/0.log" Feb 24 02:10:51.954985 master-0 kubenswrapper[7864]: I0224 02:10:51.954892 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5499d7f7bb-5g6nc_02f1d753-983a-4c4a-b1a0-560de173859a/olm-operator/0.log" Feb 24 02:10:52.085262 master-0 kubenswrapper[7864]: I0224 02:10:52.085038 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:52.085262 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:52.085262 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:52.085262 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:52.085262 master-0 kubenswrapper[7864]: I0224 02:10:52.085166 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:52.150176 master-0 kubenswrapper[7864]: I0224 02:10:52.150046 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2hllb_6320dbb5-b84d-4a57-8c65-fbed8421f84a/kube-rbac-proxy/0.log" Feb 24 02:10:52.151375 master-0 kubenswrapper[7864]: I0224 02:10:52.151312 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7b9cc5984b-smpdl"] Feb 24 02:10:52.152154 master-0 kubenswrapper[7864]: I0224 02:10:52.152114 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.155837 master-0 kubenswrapper[7864]: I0224 02:10:52.155734 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 24 02:10:52.156092 master-0 kubenswrapper[7864]: I0224 02:10:52.156029 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-n76llk2nkkst" Feb 24 02:10:52.157641 master-0 kubenswrapper[7864]: I0224 02:10:52.157561 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 24 02:10:52.158020 master-0 kubenswrapper[7864]: I0224 02:10:52.157941 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 24 02:10:52.158357 master-0 kubenswrapper[7864]: I0224 02:10:52.158258 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mdmqh" Feb 24 02:10:52.159174 master-0 kubenswrapper[7864]: I0224 02:10:52.159118 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 24 02:10:52.184673 master-0 kubenswrapper[7864]: I0224 02:10:52.184519 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7b9cc5984b-smpdl"] Feb 24 02:10:52.238516 master-0 kubenswrapper[7864]: I0224 02:10:52.238415 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.238516 master-0 kubenswrapper[7864]: I0224 02:10:52.238468 7864 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.238516 master-0 kubenswrapper[7864]: I0224 02:10:52.238521 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c396c41-c617-4631-9700-a7052af5a276-audit-log\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.239014 master-0 kubenswrapper[7864]: I0224 02:10:52.238598 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.239014 master-0 kubenswrapper[7864]: I0224 02:10:52.238627 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4grf\" (UniqueName: \"kubernetes.io/projected/8c396c41-c617-4631-9700-a7052af5a276-kube-api-access-n4grf\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.239014 master-0 kubenswrapper[7864]: I0224 02:10:52.238657 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.239014 master-0 kubenswrapper[7864]: I0224 02:10:52.238685 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.340937 master-0 kubenswrapper[7864]: I0224 02:10:52.340749 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.340937 master-0 kubenswrapper[7864]: I0224 02:10:52.340829 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.341321 master-0 kubenswrapper[7864]: I0224 02:10:52.341117 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c396c41-c617-4631-9700-a7052af5a276-audit-log\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.341321 
master-0 kubenswrapper[7864]: I0224 02:10:52.341184 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.341557 master-0 kubenswrapper[7864]: I0224 02:10:52.341462 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4grf\" (UniqueName: \"kubernetes.io/projected/8c396c41-c617-4631-9700-a7052af5a276-kube-api-access-n4grf\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.341687 master-0 kubenswrapper[7864]: I0224 02:10:52.341657 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.341916 master-0 kubenswrapper[7864]: I0224 02:10:52.341862 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.342204 master-0 kubenswrapper[7864]: I0224 02:10:52.342135 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c396c41-c617-4631-9700-a7052af5a276-audit-log\") pod 
\"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.343385 master-0 kubenswrapper[7864]: I0224 02:10:52.343329 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.344156 master-0 kubenswrapper[7864]: I0224 02:10:52.344096 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.347461 master-0 kubenswrapper[7864]: I0224 02:10:52.347393 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.348407 master-0 kubenswrapper[7864]: I0224 02:10:52.348194 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.348525 master-0 kubenswrapper[7864]: I0224 02:10:52.348457 7864 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.357313 master-0 kubenswrapper[7864]: I0224 02:10:52.357234 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2hllb_6320dbb5-b84d-4a57-8c65-fbed8421f84a/package-server-manager/0.log" Feb 24 02:10:52.378254 master-0 kubenswrapper[7864]: I0224 02:10:52.378197 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4grf\" (UniqueName: \"kubernetes.io/projected/8c396c41-c617-4631-9700-a7052af5a276-kube-api-access-n4grf\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.513859 master-0 kubenswrapper[7864]: I0224 02:10:52.513737 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:10:52.563744 master-0 kubenswrapper[7864]: I0224 02:10:52.562018 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-597975fc65-xcl6c_9cad383a-cb69-41a8-aec8-23ee1c930430/packageserver/0.log" Feb 24 02:10:53.036772 master-0 kubenswrapper[7864]: I0224 02:10:53.036698 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7b9cc5984b-smpdl"] Feb 24 02:10:53.085483 master-0 kubenswrapper[7864]: I0224 02:10:53.085399 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:53.085483 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:53.085483 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:53.085483 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:53.086353 master-0 kubenswrapper[7864]: I0224 02:10:53.085533 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:53.547118 master-0 kubenswrapper[7864]: I0224 02:10:53.547019 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" event={"ID":"8c396c41-c617-4631-9700-a7052af5a276","Type":"ContainerStarted","Data":"1684de33a1b99214450be6d5f8c060f5a9f1a7c517e642d15f1d667f8a119c75"} Feb 24 02:10:54.086336 master-0 kubenswrapper[7864]: I0224 02:10:54.086231 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:54.086336 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:54.086336 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:54.086336 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:54.087423 master-0 kubenswrapper[7864]: I0224 02:10:54.086381 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:55.085240 master-0 kubenswrapper[7864]: I0224 02:10:55.085075 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:55.085240 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:55.085240 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:55.085240 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:55.085240 master-0 kubenswrapper[7864]: I0224 02:10:55.085214 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:55.569450 master-0 kubenswrapper[7864]: I0224 02:10:55.569366 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" event={"ID":"8c396c41-c617-4631-9700-a7052af5a276","Type":"ContainerStarted","Data":"6e72c582c52cc1175706db2ef4a54c95fdecae69c4b7d4caf28fde6f98e8eaa4"} Feb 24 02:10:55.684640 master-0 kubenswrapper[7864]: I0224 02:10:55.684438 7864 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" podStartSLOduration=1.924384505 podStartE2EDuration="3.684371658s" podCreationTimestamp="2026-02-24 02:10:52 +0000 UTC" firstStartedPulling="2026-02-24 02:10:53.050234791 +0000 UTC m=+417.377888443" lastFinishedPulling="2026-02-24 02:10:54.810221944 +0000 UTC m=+419.137875596" observedRunningTime="2026-02-24 02:10:55.679749743 +0000 UTC m=+420.007403395" watchObservedRunningTime="2026-02-24 02:10:55.684371658 +0000 UTC m=+420.012025320" Feb 24 02:10:56.084213 master-0 kubenswrapper[7864]: I0224 02:10:56.084137 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:56.084213 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:56.084213 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:56.084213 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:56.084749 master-0 kubenswrapper[7864]: I0224 02:10:56.084237 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:57.085937 master-0 kubenswrapper[7864]: I0224 02:10:57.085824 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:57.085937 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:57.085937 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:57.085937 master-0 
kubenswrapper[7864]: healthz check failed Feb 24 02:10:57.087185 master-0 kubenswrapper[7864]: I0224 02:10:57.085964 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:58.086308 master-0 kubenswrapper[7864]: I0224 02:10:58.086222 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:58.086308 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:58.086308 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:58.086308 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:58.087327 master-0 kubenswrapper[7864]: I0224 02:10:58.086343 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:10:59.085973 master-0 kubenswrapper[7864]: I0224 02:10:59.085895 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:10:59.085973 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:10:59.085973 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:10:59.085973 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:10:59.087022 master-0 kubenswrapper[7864]: I0224 02:10:59.086019 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:00.085509 master-0 kubenswrapper[7864]: I0224 02:11:00.085415 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:00.085509 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:00.085509 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:00.085509 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:00.086005 master-0 kubenswrapper[7864]: I0224 02:11:00.085533 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:01.085798 master-0 kubenswrapper[7864]: I0224 02:11:01.085680 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:01.085798 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:01.085798 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:01.085798 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:01.086872 master-0 kubenswrapper[7864]: I0224 02:11:01.085827 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:02.084452 
master-0 kubenswrapper[7864]: I0224 02:11:02.084296 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:02.084452 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:02.084452 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:02.084452 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:02.084916 master-0 kubenswrapper[7864]: I0224 02:11:02.084442 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:03.084440 master-0 kubenswrapper[7864]: I0224 02:11:03.084354 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:03.084440 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:03.084440 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:03.084440 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:03.085724 master-0 kubenswrapper[7864]: I0224 02:11:03.084455 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:04.085636 master-0 kubenswrapper[7864]: I0224 02:11:04.085514 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:04.085636 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:04.085636 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:04.085636 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:04.086692 master-0 kubenswrapper[7864]: I0224 02:11:04.085667 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:05.085470 master-0 kubenswrapper[7864]: I0224 02:11:05.085386 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:05.085470 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:05.085470 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:05.085470 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:05.086547 master-0 kubenswrapper[7864]: I0224 02:11:05.085491 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:06.085435 master-0 kubenswrapper[7864]: I0224 02:11:06.085368 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:06.085435 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:06.085435 master-0 
kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:06.085435 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:06.086749 master-0 kubenswrapper[7864]: I0224 02:11:06.086699 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:07.084614 master-0 kubenswrapper[7864]: I0224 02:11:07.084479 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:07.084614 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:07.084614 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:07.084614 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:07.085178 master-0 kubenswrapper[7864]: I0224 02:11:07.084633 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:08.086250 master-0 kubenswrapper[7864]: I0224 02:11:08.086150 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:08.086250 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:08.086250 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:08.086250 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:08.087459 master-0 kubenswrapper[7864]: I0224 02:11:08.086257 7864 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:09.084984 master-0 kubenswrapper[7864]: I0224 02:11:09.084896 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:09.084984 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:09.084984 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:09.084984 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:09.085846 master-0 kubenswrapper[7864]: I0224 02:11:09.084998 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:10.083923 master-0 kubenswrapper[7864]: I0224 02:11:10.083814 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:10.083923 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:10.083923 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:10.083923 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:10.085123 master-0 kubenswrapper[7864]: I0224 02:11:10.083945 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 24 02:11:11.084977 master-0 kubenswrapper[7864]: I0224 02:11:11.084826 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:11.084977 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:11.084977 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:11.084977 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:11.084977 master-0 kubenswrapper[7864]: I0224 02:11:11.084941 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:12.084687 master-0 kubenswrapper[7864]: I0224 02:11:12.084607 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:12.084687 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:12.084687 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:12.084687 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:12.085924 master-0 kubenswrapper[7864]: I0224 02:11:12.084710 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:12.514461 master-0 kubenswrapper[7864]: I0224 02:11:12.514373 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 
24 02:11:12.514765 master-0 kubenswrapper[7864]: I0224 02:11:12.514481 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:11:13.085698 master-0 kubenswrapper[7864]: I0224 02:11:13.085566 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:13.085698 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:13.085698 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:13.085698 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:13.086812 master-0 kubenswrapper[7864]: I0224 02:11:13.085713 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:14.084878 master-0 kubenswrapper[7864]: I0224 02:11:14.084762 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:14.084878 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:14.084878 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:14.084878 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:14.084878 master-0 kubenswrapper[7864]: I0224 02:11:14.084879 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 
02:11:15.085605 master-0 kubenswrapper[7864]: I0224 02:11:15.085497 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:15.085605 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:15.085605 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:15.085605 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:15.087059 master-0 kubenswrapper[7864]: I0224 02:11:15.085626 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:16.084394 master-0 kubenswrapper[7864]: I0224 02:11:16.084314 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:16.084394 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:16.084394 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:16.084394 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:16.084980 master-0 kubenswrapper[7864]: I0224 02:11:16.084414 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:17.085626 master-0 kubenswrapper[7864]: I0224 02:11:17.085509 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:17.085626 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:17.085626 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:17.085626 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:17.087242 master-0 kubenswrapper[7864]: I0224 02:11:17.085666 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:18.084997 master-0 kubenswrapper[7864]: I0224 02:11:18.084895 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:18.084997 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:18.084997 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:18.084997 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:18.085676 master-0 kubenswrapper[7864]: I0224 02:11:18.085015 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:19.085916 master-0 kubenswrapper[7864]: I0224 02:11:19.085820 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:19.085916 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:19.085916 
master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:19.085916 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:19.085916 master-0 kubenswrapper[7864]: I0224 02:11:19.085923 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:20.085044 master-0 kubenswrapper[7864]: I0224 02:11:20.084891 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:20.085044 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:20.085044 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:20.085044 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:20.085816 master-0 kubenswrapper[7864]: I0224 02:11:20.085055 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:21.085340 master-0 kubenswrapper[7864]: I0224 02:11:21.085267 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:21.085340 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:21.085340 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:21.085340 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:21.086694 master-0 kubenswrapper[7864]: I0224 02:11:21.085354 7864 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:22.084600 master-0 kubenswrapper[7864]: I0224 02:11:22.084460 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:22.084600 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:22.084600 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:22.084600 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:22.085074 master-0 kubenswrapper[7864]: I0224 02:11:22.084627 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:23.086305 master-0 kubenswrapper[7864]: I0224 02:11:23.086196 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:23.086305 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:23.086305 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:23.086305 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:23.087438 master-0 kubenswrapper[7864]: I0224 02:11:23.086330 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 02:11:24.085511 master-0 kubenswrapper[7864]: I0224 02:11:24.085403 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:24.085511 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:24.085511 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:24.085511 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:24.086013 master-0 kubenswrapper[7864]: I0224 02:11:24.085541 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:25.085223 master-0 kubenswrapper[7864]: I0224 02:11:25.085120 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:25.085223 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:25.085223 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:25.085223 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:25.086242 master-0 kubenswrapper[7864]: I0224 02:11:25.085245 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:26.085655 master-0 kubenswrapper[7864]: I0224 02:11:26.085365 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:26.085655 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:26.085655 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:26.085655 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:26.085655 master-0 kubenswrapper[7864]: I0224 02:11:26.085513 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:27.086159 master-0 kubenswrapper[7864]: I0224 02:11:27.086032 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:27.086159 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:27.086159 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:27.086159 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:27.086159 master-0 kubenswrapper[7864]: I0224 02:11:27.086159 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:28.085323 master-0 kubenswrapper[7864]: I0224 02:11:28.085117 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:28.085323 master-0 kubenswrapper[7864]: 
[-]has-synced failed: reason withheld Feb 24 02:11:28.085323 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:28.085323 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:28.085323 master-0 kubenswrapper[7864]: I0224 02:11:28.085282 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:29.085360 master-0 kubenswrapper[7864]: I0224 02:11:29.085271 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:29.085360 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:29.085360 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:29.085360 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:29.086456 master-0 kubenswrapper[7864]: I0224 02:11:29.085398 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:30.084792 master-0 kubenswrapper[7864]: I0224 02:11:30.084665 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:30.084792 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:30.084792 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:30.084792 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:30.084792 master-0 
kubenswrapper[7864]: I0224 02:11:30.084772 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:31.085104 master-0 kubenswrapper[7864]: I0224 02:11:31.085008 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:31.085104 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:31.085104 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:31.085104 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:31.086168 master-0 kubenswrapper[7864]: I0224 02:11:31.085127 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:32.085799 master-0 kubenswrapper[7864]: I0224 02:11:32.085706 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:32.085799 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:32.085799 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:32.085799 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:32.086856 master-0 kubenswrapper[7864]: I0224 02:11:32.085852 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:32.524558 master-0 kubenswrapper[7864]: I0224 02:11:32.524463 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:11:32.531615 master-0 kubenswrapper[7864]: I0224 02:11:32.531543 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:11:33.084945 master-0 kubenswrapper[7864]: I0224 02:11:33.084799 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:33.084945 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:33.084945 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:33.084945 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:33.084945 master-0 kubenswrapper[7864]: I0224 02:11:33.084924 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:34.085383 master-0 kubenswrapper[7864]: I0224 02:11:34.085290 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:34.085383 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:34.085383 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:34.085383 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:34.086449 master-0 
kubenswrapper[7864]: I0224 02:11:34.085427 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:35.084991 master-0 kubenswrapper[7864]: I0224 02:11:35.084842 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:35.084991 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:35.084991 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:35.084991 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:35.085466 master-0 kubenswrapper[7864]: I0224 02:11:35.085007 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:36.086037 master-0 kubenswrapper[7864]: I0224 02:11:36.085949 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:36.086037 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:36.086037 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:36.086037 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:36.087082 master-0 kubenswrapper[7864]: I0224 02:11:36.086056 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:37.086892 master-0 kubenswrapper[7864]: I0224 02:11:37.086163 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:37.086892 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:37.086892 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:37.086892 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:37.086892 master-0 kubenswrapper[7864]: I0224 02:11:37.086271 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:38.084922 master-0 kubenswrapper[7864]: I0224 02:11:38.084790 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:38.084922 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:38.084922 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:38.084922 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:38.085530 master-0 kubenswrapper[7864]: I0224 02:11:38.084946 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:39.085421 master-0 kubenswrapper[7864]: I0224 02:11:39.085315 7864 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:39.085421 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:39.085421 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:39.085421 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:39.086733 master-0 kubenswrapper[7864]: I0224 02:11:39.085449 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:40.085032 master-0 kubenswrapper[7864]: I0224 02:11:40.084939 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:40.085032 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:40.085032 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:40.085032 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:40.085499 master-0 kubenswrapper[7864]: I0224 02:11:40.085065 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:41.085256 master-0 kubenswrapper[7864]: I0224 02:11:41.085162 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
02:11:41.085256 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:41.085256 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:41.085256 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:41.086305 master-0 kubenswrapper[7864]: I0224 02:11:41.085275 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:42.084714 master-0 kubenswrapper[7864]: I0224 02:11:42.084623 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:42.084714 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:42.084714 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:42.084714 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:42.085195 master-0 kubenswrapper[7864]: I0224 02:11:42.084730 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:43.085501 master-0 kubenswrapper[7864]: I0224 02:11:43.085414 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:43.085501 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:43.085501 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:43.085501 master-0 kubenswrapper[7864]: healthz 
check failed Feb 24 02:11:43.086769 master-0 kubenswrapper[7864]: I0224 02:11:43.085511 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:44.167052 master-0 kubenswrapper[7864]: I0224 02:11:44.154848 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:44.167052 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:44.167052 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:44.167052 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:44.167052 master-0 kubenswrapper[7864]: I0224 02:11:44.154954 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:45.085186 master-0 kubenswrapper[7864]: I0224 02:11:45.085077 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:45.085186 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:45.085186 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:45.085186 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:45.085186 master-0 kubenswrapper[7864]: I0224 02:11:45.085182 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" 
podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:46.084908 master-0 kubenswrapper[7864]: I0224 02:11:46.084805 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:46.084908 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:46.084908 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:46.084908 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:46.084908 master-0 kubenswrapper[7864]: I0224 02:11:46.084908 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:47.085388 master-0 kubenswrapper[7864]: I0224 02:11:47.085275 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:47.085388 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:47.085388 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:47.085388 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:47.086654 master-0 kubenswrapper[7864]: I0224 02:11:47.085398 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:48.084707 master-0 kubenswrapper[7864]: I0224 02:11:48.084570 7864 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:48.084707 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:48.084707 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:48.084707 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:48.085245 master-0 kubenswrapper[7864]: I0224 02:11:48.084740 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:49.084906 master-0 kubenswrapper[7864]: I0224 02:11:49.084809 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:49.084906 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:49.084906 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:49.084906 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:49.085967 master-0 kubenswrapper[7864]: I0224 02:11:49.084921 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:50.084628 master-0 kubenswrapper[7864]: I0224 02:11:50.084406 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:50.084628 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:50.084628 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:50.084628 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:50.084628 master-0 kubenswrapper[7864]: I0224 02:11:50.084519 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:51.084858 master-0 kubenswrapper[7864]: I0224 02:11:51.084709 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:51.084858 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:51.084858 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:51.084858 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:51.084858 master-0 kubenswrapper[7864]: I0224 02:11:51.084851 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:52.084900 master-0 kubenswrapper[7864]: I0224 02:11:52.084798 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:52.084900 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:52.084900 master-0 kubenswrapper[7864]: [+]process-running ok 
Feb 24 02:11:52.084900 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:52.085919 master-0 kubenswrapper[7864]: I0224 02:11:52.084918 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:52.400399 master-0 kubenswrapper[7864]: I0224 02:11:52.400312 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/1.log" Feb 24 02:11:52.402232 master-0 kubenswrapper[7864]: I0224 02:11:52.402168 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/0.log" Feb 24 02:11:52.402326 master-0 kubenswrapper[7864]: I0224 02:11:52.402263 7864 generic.go:334] "Generic (PLEG): container finished" podID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" containerID="e132e5edce75784f17a5d6083ace7875ddb1c8bd8a8f79447fddc0374107052b" exitCode=1 Feb 24 02:11:52.402326 master-0 kubenswrapper[7864]: I0224 02:11:52.402313 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerDied","Data":"e132e5edce75784f17a5d6083ace7875ddb1c8bd8a8f79447fddc0374107052b"} Feb 24 02:11:52.402457 master-0 kubenswrapper[7864]: I0224 02:11:52.402373 7864 scope.go:117] "RemoveContainer" containerID="bb8e1724e77d6ceb463e444b223fcd8637d9a803be2af1a8dcbebbfedcda21d8" Feb 24 02:11:52.403946 master-0 kubenswrapper[7864]: I0224 02:11:52.403491 7864 scope.go:117] "RemoveContainer" containerID="e132e5edce75784f17a5d6083ace7875ddb1c8bd8a8f79447fddc0374107052b" Feb 24 02:11:52.404180 master-0 kubenswrapper[7864]: E0224 02:11:52.404118 7864 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" Feb 24 02:11:53.084638 master-0 kubenswrapper[7864]: I0224 02:11:53.084534 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:53.084638 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:53.084638 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:53.084638 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:53.085110 master-0 kubenswrapper[7864]: I0224 02:11:53.084667 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:53.414557 master-0 kubenswrapper[7864]: I0224 02:11:53.414347 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/1.log" Feb 24 02:11:54.085558 master-0 kubenswrapper[7864]: I0224 02:11:54.085472 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:54.085558 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 
02:11:54.085558 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:54.085558 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:54.086109 master-0 kubenswrapper[7864]: I0224 02:11:54.085617 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:55.088899 master-0 kubenswrapper[7864]: I0224 02:11:55.088789 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:55.088899 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:55.088899 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:55.088899 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:55.090014 master-0 kubenswrapper[7864]: I0224 02:11:55.088896 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:56.085793 master-0 kubenswrapper[7864]: I0224 02:11:56.085694 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:56.085793 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:56.085793 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:56.085793 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:56.088836 master-0 kubenswrapper[7864]: I0224 02:11:56.085820 
7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:57.085753 master-0 kubenswrapper[7864]: I0224 02:11:57.085694 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:57.085753 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:57.085753 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:57.085753 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:57.086903 master-0 kubenswrapper[7864]: I0224 02:11:57.086516 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:11:58.085567 master-0 kubenswrapper[7864]: I0224 02:11:58.085458 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:58.085567 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:58.085567 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:58.085567 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:58.086891 master-0 kubenswrapper[7864]: I0224 02:11:58.085622 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 24 02:11:59.084988 master-0 kubenswrapper[7864]: I0224 02:11:59.084894 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:11:59.084988 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:11:59.084988 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:11:59.084988 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:11:59.085800 master-0 kubenswrapper[7864]: I0224 02:11:59.085007 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:00.084837 master-0 kubenswrapper[7864]: I0224 02:12:00.084766 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:00.084837 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:00.084837 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:00.084837 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:00.085931 master-0 kubenswrapper[7864]: I0224 02:12:00.084864 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:01.085374 master-0 kubenswrapper[7864]: I0224 02:12:01.085280 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:01.085374 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:01.085374 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:01.085374 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:01.086847 master-0 kubenswrapper[7864]: I0224 02:12:01.085418 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:02.084668 master-0 kubenswrapper[7864]: I0224 02:12:02.084540 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:02.084668 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:02.084668 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:02.084668 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:02.085171 master-0 kubenswrapper[7864]: I0224 02:12:02.084725 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:03.084391 master-0 kubenswrapper[7864]: I0224 02:12:03.084305 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:03.084391 master-0 kubenswrapper[7864]: 
[-]has-synced failed: reason withheld Feb 24 02:12:03.084391 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:03.084391 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:03.085462 master-0 kubenswrapper[7864]: I0224 02:12:03.084427 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:04.084410 master-0 kubenswrapper[7864]: I0224 02:12:04.084325 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:04.084410 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:04.084410 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:04.084410 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:04.085504 master-0 kubenswrapper[7864]: I0224 02:12:04.084443 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:05.085180 master-0 kubenswrapper[7864]: I0224 02:12:05.085047 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:05.085180 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:05.085180 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:05.085180 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:05.086347 master-0 
kubenswrapper[7864]: I0224 02:12:05.085200 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:05.881696 master-0 kubenswrapper[7864]: I0224 02:12:05.881557 7864 scope.go:117] "RemoveContainer" containerID="e132e5edce75784f17a5d6083ace7875ddb1c8bd8a8f79447fddc0374107052b" Feb 24 02:12:06.085953 master-0 kubenswrapper[7864]: I0224 02:12:06.085870 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:06.085953 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:06.085953 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:06.085953 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:06.086971 master-0 kubenswrapper[7864]: I0224 02:12:06.085975 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:06.535271 master-0 kubenswrapper[7864]: I0224 02:12:06.535195 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/1.log" Feb 24 02:12:06.536180 master-0 kubenswrapper[7864]: I0224 02:12:06.535780 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"f672e7e1695861ae42afc6a8a2613baf97d7bf401a8f7ddcbb43e4e1e3238a5e"} Feb 24 
02:12:07.083989 master-0 kubenswrapper[7864]: I0224 02:12:07.083911 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:07.083989 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:07.083989 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:07.083989 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:07.084390 master-0 kubenswrapper[7864]: I0224 02:12:07.084016 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:08.084645 master-0 kubenswrapper[7864]: I0224 02:12:08.084564 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:08.084645 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:08.084645 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:08.084645 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:08.085324 master-0 kubenswrapper[7864]: I0224 02:12:08.084666 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:09.084783 master-0 kubenswrapper[7864]: I0224 02:12:09.084490 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:09.084783 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:09.084783 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:09.084783 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:09.084783 master-0 kubenswrapper[7864]: I0224 02:12:09.084622 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:10.089439 master-0 kubenswrapper[7864]: I0224 02:12:10.089323 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:10.089439 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:10.089439 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:10.089439 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:10.090381 master-0 kubenswrapper[7864]: I0224 02:12:10.089449 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:11.085313 master-0 kubenswrapper[7864]: I0224 02:12:11.085214 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:11.085313 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:11.085313 
master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:11.085313 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:11.085313 master-0 kubenswrapper[7864]: I0224 02:12:11.085313 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:12.085360 master-0 kubenswrapper[7864]: I0224 02:12:12.085263 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:12.085360 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:12.085360 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:12.085360 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:12.086651 master-0 kubenswrapper[7864]: I0224 02:12:12.085377 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:13.085620 master-0 kubenswrapper[7864]: I0224 02:12:13.085493 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:13.085620 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:13.085620 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:13.085620 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:13.086687 master-0 kubenswrapper[7864]: I0224 02:12:13.085656 7864 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:14.085855 master-0 kubenswrapper[7864]: I0224 02:12:14.085758 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:14.085855 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:14.085855 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:14.085855 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:14.086912 master-0 kubenswrapper[7864]: I0224 02:12:14.085870 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:15.085199 master-0 kubenswrapper[7864]: I0224 02:12:15.085094 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:15.085199 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:15.085199 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:15.085199 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:15.085752 master-0 kubenswrapper[7864]: I0224 02:12:15.085217 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 02:12:16.085159 master-0 kubenswrapper[7864]: I0224 02:12:16.085059 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:16.085159 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:16.085159 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:16.085159 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:16.086349 master-0 kubenswrapper[7864]: I0224 02:12:16.085177 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:17.084420 master-0 kubenswrapper[7864]: I0224 02:12:17.084348 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:17.084420 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:17.084420 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:17.084420 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:17.084837 master-0 kubenswrapper[7864]: I0224 02:12:17.084461 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:18.085143 master-0 kubenswrapper[7864]: I0224 02:12:18.085078 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:18.085143 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:18.085143 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:18.085143 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:18.087114 master-0 kubenswrapper[7864]: I0224 02:12:18.087027 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:19.086037 master-0 kubenswrapper[7864]: I0224 02:12:19.085935 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:19.086037 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:19.086037 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:19.086037 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:19.088278 master-0 kubenswrapper[7864]: I0224 02:12:19.086051 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:20.085319 master-0 kubenswrapper[7864]: I0224 02:12:20.085219 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:20.085319 master-0 kubenswrapper[7864]: 
[-]has-synced failed: reason withheld Feb 24 02:12:20.085319 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:20.085319 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:20.085998 master-0 kubenswrapper[7864]: I0224 02:12:20.085331 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:21.085104 master-0 kubenswrapper[7864]: I0224 02:12:21.084998 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:21.085104 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:21.085104 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:21.085104 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:21.086232 master-0 kubenswrapper[7864]: I0224 02:12:21.085115 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:22.084735 master-0 kubenswrapper[7864]: I0224 02:12:22.084598 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:22.084735 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:22.084735 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:22.084735 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:22.084735 master-0 
kubenswrapper[7864]: I0224 02:12:22.084703 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:23.084838 master-0 kubenswrapper[7864]: I0224 02:12:23.084743 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:23.084838 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:23.084838 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:23.084838 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:23.085950 master-0 kubenswrapper[7864]: I0224 02:12:23.084863 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:24.086146 master-0 kubenswrapper[7864]: I0224 02:12:24.086048 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:24.086146 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:24.086146 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:24.086146 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:24.087753 master-0 kubenswrapper[7864]: I0224 02:12:24.086152 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:25.085646 master-0 kubenswrapper[7864]: I0224 02:12:25.085349 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:25.085646 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:25.085646 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:25.085646 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:25.085646 master-0 kubenswrapper[7864]: I0224 02:12:25.085467 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:26.087756 master-0 kubenswrapper[7864]: I0224 02:12:26.087649 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:26.087756 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:26.087756 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:26.087756 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:26.088891 master-0 kubenswrapper[7864]: I0224 02:12:26.087775 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:27.084672 master-0 kubenswrapper[7864]: I0224 02:12:27.084516 7864 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:27.084672 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:27.084672 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:27.084672 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:27.085243 master-0 kubenswrapper[7864]: I0224 02:12:27.084723 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:28.351386 master-0 kubenswrapper[7864]: I0224 02:12:28.351323 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:28.351386 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:28.351386 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:28.351386 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:28.352090 master-0 kubenswrapper[7864]: I0224 02:12:28.351410 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:29.084780 master-0 kubenswrapper[7864]: I0224 02:12:29.084673 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
02:12:29.084780 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:29.084780 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:29.084780 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:29.085328 master-0 kubenswrapper[7864]: I0224 02:12:29.084977 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:30.084552 master-0 kubenswrapper[7864]: I0224 02:12:30.084456 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:30.084552 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:30.084552 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:30.084552 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:30.085938 master-0 kubenswrapper[7864]: I0224 02:12:30.084569 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:31.085217 master-0 kubenswrapper[7864]: I0224 02:12:31.085161 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:31.085217 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:31.085217 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:31.085217 master-0 kubenswrapper[7864]: healthz 
check failed Feb 24 02:12:31.086369 master-0 kubenswrapper[7864]: I0224 02:12:31.086304 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:32.087653 master-0 kubenswrapper[7864]: I0224 02:12:32.087546 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:32.087653 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:32.087653 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:32.087653 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:32.088801 master-0 kubenswrapper[7864]: I0224 02:12:32.087657 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:12:33.085530 master-0 kubenswrapper[7864]: I0224 02:12:33.085439 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:12:33.085530 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:12:33.085530 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:12:33.085530 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:12:33.085953 master-0 kubenswrapper[7864]: I0224 02:12:33.085538 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" 
podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:12:34.084614 master-0 kubenswrapper[7864]: I0224 02:12:34.084496 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:12:34.084614 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:12:34.084614 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:12:34.084614 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:12:34.085716 master-0 kubenswrapper[7864]: I0224 02:12:34.084658 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:12:35.085267 master-0 kubenswrapper[7864]: I0224 02:12:35.085198 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:12:35.085267 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:12:35.085267 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:12:35.085267 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:12:35.086276 master-0 kubenswrapper[7864]: I0224 02:12:35.085306 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:12:36.084972 master-0 kubenswrapper[7864]: I0224 02:12:36.084831 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:12:36.084972 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:12:36.084972 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:12:36.084972 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:12:36.086422 master-0 kubenswrapper[7864]: I0224 02:12:36.084980 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:12:37.084795 master-0 kubenswrapper[7864]: I0224 02:12:37.084690 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:12:37.084795 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:12:37.084795 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:12:37.084795 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:12:37.085288 master-0 kubenswrapper[7864]: I0224 02:12:37.084837 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:12:37.085288 master-0 kubenswrapper[7864]: I0224 02:12:37.084975 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl"
Feb 24 02:12:37.086680 master-0 kubenswrapper[7864]: I0224 02:12:37.086260 7864 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"160afcf676f240f4edef48c62c951182da5bbf1b49c67d215747997188263486"} pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" containerMessage="Container router failed startup probe, will be restarted"
Feb 24 02:12:37.086680 master-0 kubenswrapper[7864]: I0224 02:12:37.086344 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" containerID="cri-o://160afcf676f240f4edef48c62c951182da5bbf1b49c67d215747997188263486" gracePeriod=3600
Feb 24 02:12:52.430931 master-0 kubenswrapper[7864]: I0224 02:12:52.430822 7864 patch_prober.go:28] interesting pod/machine-config-daemon-hfpql container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 02:12:52.432008 master-0 kubenswrapper[7864]: I0224 02:12:52.430945 7864 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" podUID="df42c69b-1a0e-41f5-9006-17540369b9ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 02:13:24.220789 master-0 kubenswrapper[7864]: I0224 02:13:24.220636 7864 generic.go:334] "Generic (PLEG): container finished" podID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerID="160afcf676f240f4edef48c62c951182da5bbf1b49c67d215747997188263486" exitCode=0
Feb 24 02:13:24.220789 master-0 kubenswrapper[7864]: I0224 02:13:24.220748 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerDied","Data":"160afcf676f240f4edef48c62c951182da5bbf1b49c67d215747997188263486"}
Feb 24 02:13:24.221849 master-0 kubenswrapper[7864]: I0224 02:13:24.220845 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerStarted","Data":"2b3213a5477253a9e7e61477d63287c522e2dc114274fd4d6193d811f2552967"}
Feb 24 02:13:25.081525 master-0 kubenswrapper[7864]: I0224 02:13:25.081428 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl"
Feb 24 02:13:25.085672 master-0 kubenswrapper[7864]: I0224 02:13:25.085613 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:13:25.085672 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:13:25.085672 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:13:25.085672 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:13:25.085954 master-0 kubenswrapper[7864]: I0224 02:13:25.085706 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:13:26.085121 master-0 kubenswrapper[7864]: I0224 02:13:26.085025 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:13:26.085121 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:13:26.085121 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:26.085121 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:26.086235 master-0 kubenswrapper[7864]: I0224 02:13:26.085135 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:27.085816 master-0 kubenswrapper[7864]: I0224 02:13:27.085711 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:27.085816 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:27.085816 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:27.085816 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:27.086962 master-0 kubenswrapper[7864]: I0224 02:13:27.085837 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:28.085660 master-0 kubenswrapper[7864]: I0224 02:13:28.084976 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:28.085660 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:28.085660 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:28.085660 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:28.085660 master-0 kubenswrapper[7864]: I0224 
02:13:28.085119 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:29.084908 master-0 kubenswrapper[7864]: I0224 02:13:29.084791 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:29.084908 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:29.084908 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:29.084908 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:29.085691 master-0 kubenswrapper[7864]: I0224 02:13:29.084916 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:30.085701 master-0 kubenswrapper[7864]: I0224 02:13:30.085565 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:30.085701 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:30.085701 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:30.085701 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:30.086960 master-0 kubenswrapper[7864]: I0224 02:13:30.085718 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500"
Feb 24 02:13:31.084822 master-0 kubenswrapper[7864]: I0224 02:13:31.084706 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:13:31.084822 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:13:31.084822 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:13:31.084822 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:13:31.085331 master-0 kubenswrapper[7864]: I0224 02:13:31.084829 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:13:32.081443 master-0 kubenswrapper[7864]: I0224 02:13:32.081361 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl"
Feb 24 02:13:32.085633 master-0 kubenswrapper[7864]: I0224 02:13:32.085557 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:13:32.085633 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:13:32.085633 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:13:32.085633 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:13:32.086045 master-0 kubenswrapper[7864]: I0224 02:13:32.085997 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe
failed with statuscode: 500" Feb 24 02:13:33.085203 master-0 kubenswrapper[7864]: I0224 02:13:33.085123 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:33.085203 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:33.085203 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:33.085203 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:33.086491 master-0 kubenswrapper[7864]: I0224 02:13:33.085238 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:34.085114 master-0 kubenswrapper[7864]: I0224 02:13:34.085033 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:34.085114 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:34.085114 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:34.085114 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:34.086161 master-0 kubenswrapper[7864]: I0224 02:13:34.085808 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:35.085226 master-0 kubenswrapper[7864]: I0224 02:13:35.085136 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:35.085226 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:35.085226 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:35.085226 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:35.086527 master-0 kubenswrapper[7864]: I0224 02:13:35.085267 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:36.085605 master-0 kubenswrapper[7864]: I0224 02:13:36.085491 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:36.085605 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:36.085605 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:36.085605 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:36.086749 master-0 kubenswrapper[7864]: I0224 02:13:36.085614 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:37.085785 master-0 kubenswrapper[7864]: I0224 02:13:37.085635 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:37.085785 master-0 kubenswrapper[7864]: 
[-]has-synced failed: reason withheld Feb 24 02:13:37.085785 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:37.085785 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:37.085785 master-0 kubenswrapper[7864]: I0224 02:13:37.085745 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:38.084984 master-0 kubenswrapper[7864]: I0224 02:13:38.084833 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:38.084984 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:38.084984 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:38.084984 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:38.084984 master-0 kubenswrapper[7864]: I0224 02:13:38.084961 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:39.085785 master-0 kubenswrapper[7864]: I0224 02:13:39.085634 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:39.085785 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:39.085785 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:39.085785 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:39.085785 master-0 
kubenswrapper[7864]: I0224 02:13:39.085749 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:40.084998 master-0 kubenswrapper[7864]: I0224 02:13:40.084906 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:40.084998 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:40.084998 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:40.084998 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:40.085535 master-0 kubenswrapper[7864]: I0224 02:13:40.085027 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:41.085234 master-0 kubenswrapper[7864]: I0224 02:13:41.085115 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:41.085234 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:41.085234 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:41.085234 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:41.086605 master-0 kubenswrapper[7864]: I0224 02:13:41.085239 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:42.085487 master-0 kubenswrapper[7864]: I0224 02:13:42.085395 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:42.085487 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:42.085487 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:42.085487 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:42.086720 master-0 kubenswrapper[7864]: I0224 02:13:42.085511 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:43.085038 master-0 kubenswrapper[7864]: I0224 02:13:43.084909 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:43.085038 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:43.085038 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:43.085038 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:43.086234 master-0 kubenswrapper[7864]: I0224 02:13:43.085056 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:44.085563 master-0 kubenswrapper[7864]: I0224 02:13:44.085424 7864 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:44.085563 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:44.085563 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:44.085563 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:44.086898 master-0 kubenswrapper[7864]: I0224 02:13:44.085560 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:45.084200 master-0 kubenswrapper[7864]: I0224 02:13:45.084120 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:45.084200 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:45.084200 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:45.084200 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:45.084597 master-0 kubenswrapper[7864]: I0224 02:13:45.084237 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:46.084829 master-0 kubenswrapper[7864]: I0224 02:13:46.084597 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
02:13:46.084829 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:46.084829 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:46.084829 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:46.084829 master-0 kubenswrapper[7864]: I0224 02:13:46.084725 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:47.085320 master-0 kubenswrapper[7864]: I0224 02:13:47.085203 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:47.085320 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:47.085320 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:47.085320 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:47.086666 master-0 kubenswrapper[7864]: I0224 02:13:47.085320 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:48.085840 master-0 kubenswrapper[7864]: I0224 02:13:48.085759 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:48.085840 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:48.085840 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:48.085840 master-0 kubenswrapper[7864]: healthz 
check failed Feb 24 02:13:48.087120 master-0 kubenswrapper[7864]: I0224 02:13:48.087060 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:49.086901 master-0 kubenswrapper[7864]: I0224 02:13:49.086785 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:49.086901 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:49.086901 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:49.086901 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:49.088137 master-0 kubenswrapper[7864]: I0224 02:13:49.086926 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:50.085326 master-0 kubenswrapper[7864]: I0224 02:13:50.085227 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:50.085326 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:50.085326 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:50.085326 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:50.085904 master-0 kubenswrapper[7864]: I0224 02:13:50.085329 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" 
podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:51.085350 master-0 kubenswrapper[7864]: I0224 02:13:51.085287 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:51.085350 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:51.085350 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:51.085350 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:51.086524 master-0 kubenswrapper[7864]: I0224 02:13:51.086473 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:52.084152 master-0 kubenswrapper[7864]: I0224 02:13:52.084062 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:52.084152 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:52.084152 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:52.084152 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:52.084737 master-0 kubenswrapper[7864]: I0224 02:13:52.084191 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:53.084560 master-0 kubenswrapper[7864]: I0224 02:13:53.084469 7864 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:53.084560 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:53.084560 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:53.084560 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:53.085829 master-0 kubenswrapper[7864]: I0224 02:13:53.084612 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:54.084896 master-0 kubenswrapper[7864]: I0224 02:13:54.084810 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:54.084896 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:54.084896 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:54.084896 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:54.085799 master-0 kubenswrapper[7864]: I0224 02:13:54.084943 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:55.085704 master-0 kubenswrapper[7864]: I0224 02:13:55.085613 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:55.085704 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:55.085704 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:55.085704 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:55.086835 master-0 kubenswrapper[7864]: I0224 02:13:55.085804 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:56.085153 master-0 kubenswrapper[7864]: I0224 02:13:56.085054 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:56.085153 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:56.085153 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:56.085153 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:56.085677 master-0 kubenswrapper[7864]: I0224 02:13:56.085195 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:57.087352 master-0 kubenswrapper[7864]: I0224 02:13:57.087246 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:57.087352 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:57.087352 master-0 kubenswrapper[7864]: [+]process-running ok 
Feb 24 02:13:57.087352 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:57.088562 master-0 kubenswrapper[7864]: I0224 02:13:57.087373 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:58.084441 master-0 kubenswrapper[7864]: I0224 02:13:58.084358 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:58.084441 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:58.084441 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:58.084441 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:58.085399 master-0 kubenswrapper[7864]: I0224 02:13:58.084456 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:13:59.085899 master-0 kubenswrapper[7864]: I0224 02:13:59.085804 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:13:59.085899 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:13:59.085899 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:13:59.085899 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:13:59.087113 master-0 kubenswrapper[7864]: I0224 02:13:59.085974 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:00.084936 master-0 kubenswrapper[7864]: I0224 02:14:00.084832 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:00.084936 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:00.084936 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:00.084936 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:00.085435 master-0 kubenswrapper[7864]: I0224 02:14:00.084951 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:01.085109 master-0 kubenswrapper[7864]: I0224 02:14:01.085024 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:01.085109 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:01.085109 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:01.085109 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:01.086209 master-0 kubenswrapper[7864]: I0224 02:14:01.085131 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:02.084945 
master-0 kubenswrapper[7864]: I0224 02:14:02.084865 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:02.084945 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:02.084945 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:02.084945 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:02.086216 master-0 kubenswrapper[7864]: I0224 02:14:02.084967 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:03.085001 master-0 kubenswrapper[7864]: I0224 02:14:03.084924 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:03.085001 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:03.085001 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:03.085001 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:03.086168 master-0 kubenswrapper[7864]: I0224 02:14:03.085020 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:04.819415 master-0 kubenswrapper[7864]: I0224 02:14:04.819331 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:04.819415 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:04.819415 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:04.819415 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:04.821073 master-0 kubenswrapper[7864]: I0224 02:14:04.819433 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:05.085056 master-0 kubenswrapper[7864]: I0224 02:14:05.084861 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:05.085056 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:05.085056 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:05.085056 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:05.085056 master-0 kubenswrapper[7864]: I0224 02:14:05.084988 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:06.085098 master-0 kubenswrapper[7864]: I0224 02:14:06.085004 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:06.085098 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:06.085098 master-0 
kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:06.085098 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:06.086187 master-0 kubenswrapper[7864]: I0224 02:14:06.085142 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:07.084876 master-0 kubenswrapper[7864]: I0224 02:14:07.084776 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:07.084876 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:07.084876 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:07.084876 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:07.085990 master-0 kubenswrapper[7864]: I0224 02:14:07.084915 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:07.970945 master-0 kubenswrapper[7864]: I0224 02:14:07.970845 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jjpsc"] Feb 24 02:14:07.972483 master-0 kubenswrapper[7864]: I0224 02:14:07.972439 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:14:07.975243 master-0 kubenswrapper[7864]: I0224 02:14:07.975202 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-xlxp2" Feb 24 02:14:07.977117 master-0 kubenswrapper[7864]: I0224 02:14:07.976691 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jjpsc"] Feb 24 02:14:07.977448 master-0 kubenswrapper[7864]: I0224 02:14:07.977389 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 02:14:07.979708 master-0 kubenswrapper[7864]: I0224 02:14:07.977831 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 02:14:07.979708 master-0 kubenswrapper[7864]: I0224 02:14:07.977887 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 02:14:08.077490 master-0 kubenswrapper[7864]: I0224 02:14:08.077408 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:14:08.077740 master-0 kubenswrapper[7864]: I0224 02:14:08.077502 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbzsl\" (UniqueName: \"kubernetes.io/projected/3e36c9eb-0368-46dc-af84-9c602a15555d-kube-api-access-lbzsl\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:14:08.084975 master-0 kubenswrapper[7864]: I0224 02:14:08.084936 7864 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:08.084975 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:08.084975 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:08.084975 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:08.085211 master-0 kubenswrapper[7864]: I0224 02:14:08.085179 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:08.178294 master-0 kubenswrapper[7864]: I0224 02:14:08.178240 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:14:08.178754 master-0 kubenswrapper[7864]: I0224 02:14:08.178319 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbzsl\" (UniqueName: \"kubernetes.io/projected/3e36c9eb-0368-46dc-af84-9c602a15555d-kube-api-access-lbzsl\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:14:08.179003 master-0 kubenswrapper[7864]: E0224 02:14:08.178979 7864 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 24 02:14:08.179128 master-0 kubenswrapper[7864]: E0224 02:14:08.179116 7864 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert 
podName:3e36c9eb-0368-46dc-af84-9c602a15555d nodeName:}" failed. No retries permitted until 2026-02-24 02:14:08.679085594 +0000 UTC m=+613.006739216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert") pod "ingress-canary-jjpsc" (UID: "3e36c9eb-0368-46dc-af84-9c602a15555d") : secret "canary-serving-cert" not found Feb 24 02:14:08.209013 master-0 kubenswrapper[7864]: I0224 02:14:08.208947 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbzsl\" (UniqueName: \"kubernetes.io/projected/3e36c9eb-0368-46dc-af84-9c602a15555d-kube-api-access-lbzsl\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:14:08.687568 master-0 kubenswrapper[7864]: I0224 02:14:08.687471 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:14:08.693381 master-0 kubenswrapper[7864]: I0224 02:14:08.692932 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:14:08.907466 master-0 kubenswrapper[7864]: I0224 02:14:08.907368 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:14:08.909370 master-0 kubenswrapper[7864]: I0224 02:14:08.909269 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/2.log" Feb 24 02:14:08.910286 master-0 kubenswrapper[7864]: I0224 02:14:08.910236 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/1.log" Feb 24 02:14:08.911244 master-0 kubenswrapper[7864]: I0224 02:14:08.911189 7864 generic.go:334] "Generic (PLEG): container finished" podID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" containerID="f672e7e1695861ae42afc6a8a2613baf97d7bf401a8f7ddcbb43e4e1e3238a5e" exitCode=1 Feb 24 02:14:08.911292 master-0 kubenswrapper[7864]: I0224 02:14:08.911256 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerDied","Data":"f672e7e1695861ae42afc6a8a2613baf97d7bf401a8f7ddcbb43e4e1e3238a5e"} Feb 24 02:14:08.911342 master-0 kubenswrapper[7864]: I0224 02:14:08.911315 7864 scope.go:117] "RemoveContainer" containerID="e132e5edce75784f17a5d6083ace7875ddb1c8bd8a8f79447fddc0374107052b" Feb 24 02:14:08.912032 master-0 kubenswrapper[7864]: I0224 02:14:08.911992 7864 scope.go:117] "RemoveContainer" containerID="f672e7e1695861ae42afc6a8a2613baf97d7bf401a8f7ddcbb43e4e1e3238a5e" Feb 24 02:14:08.912405 master-0 kubenswrapper[7864]: E0224 02:14:08.912362 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" 
pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" Feb 24 02:14:09.086703 master-0 kubenswrapper[7864]: I0224 02:14:09.084969 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:09.086703 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:09.086703 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:09.086703 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:09.086703 master-0 kubenswrapper[7864]: I0224 02:14:09.085096 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:09.485550 master-0 kubenswrapper[7864]: I0224 02:14:09.485285 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jjpsc"] Feb 24 02:14:09.488570 master-0 kubenswrapper[7864]: W0224 02:14:09.488487 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e36c9eb_0368_46dc_af84_9c602a15555d.slice/crio-0034746b398351f91b0a88e97985b40bb4895c122d618141a5cb5cca87941d23 WatchSource:0}: Error finding container 0034746b398351f91b0a88e97985b40bb4895c122d618141a5cb5cca87941d23: Status 404 returned error can't find the container with id 0034746b398351f91b0a88e97985b40bb4895c122d618141a5cb5cca87941d23 Feb 24 02:14:09.921374 master-0 kubenswrapper[7864]: I0224 02:14:09.921182 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/2.log" 
Feb 24 02:14:09.924425 master-0 kubenswrapper[7864]: I0224 02:14:09.924342 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jjpsc" event={"ID":"3e36c9eb-0368-46dc-af84-9c602a15555d","Type":"ContainerStarted","Data":"687d08c64fa062df61a0c3e82a45be4f2c11c06761c616052b9e81d2135f7d70"} Feb 24 02:14:09.924621 master-0 kubenswrapper[7864]: I0224 02:14:09.924439 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jjpsc" event={"ID":"3e36c9eb-0368-46dc-af84-9c602a15555d","Type":"ContainerStarted","Data":"0034746b398351f91b0a88e97985b40bb4895c122d618141a5cb5cca87941d23"} Feb 24 02:14:09.957515 master-0 kubenswrapper[7864]: I0224 02:14:09.957390 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jjpsc" podStartSLOduration=2.957358915 podStartE2EDuration="2.957358915s" podCreationTimestamp="2026-02-24 02:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:14:09.95215022 +0000 UTC m=+614.279803882" watchObservedRunningTime="2026-02-24 02:14:09.957358915 +0000 UTC m=+614.285012567" Feb 24 02:14:10.085500 master-0 kubenswrapper[7864]: I0224 02:14:10.085363 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:10.085500 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:10.085500 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:10.085500 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:10.086232 master-0 kubenswrapper[7864]: I0224 02:14:10.085558 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:11.084709 master-0 kubenswrapper[7864]: I0224 02:14:11.084602 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:11.084709 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:11.084709 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:11.084709 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:11.085420 master-0 kubenswrapper[7864]: I0224 02:14:11.084739 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:12.084275 master-0 kubenswrapper[7864]: I0224 02:14:12.084165 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:12.084275 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:12.084275 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:12.084275 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:12.085488 master-0 kubenswrapper[7864]: I0224 02:14:12.084299 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:13.084814 
master-0 kubenswrapper[7864]: I0224 02:14:13.084701 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:13.084814 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:13.084814 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:13.084814 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:13.085975 master-0 kubenswrapper[7864]: I0224 02:14:13.084811 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:14.084378 master-0 kubenswrapper[7864]: I0224 02:14:14.084258 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:14.084378 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:14.084378 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:14.084378 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:14.084378 master-0 kubenswrapper[7864]: I0224 02:14:14.084366 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:15.084691 master-0 kubenswrapper[7864]: I0224 02:14:15.084601 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:15.084691 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:15.084691 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:15.084691 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:15.085905 master-0 kubenswrapper[7864]: I0224 02:14:15.084716 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:16.085918 master-0 kubenswrapper[7864]: I0224 02:14:16.085804 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:16.085918 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:16.085918 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:16.085918 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:16.087469 master-0 kubenswrapper[7864]: I0224 02:14:16.085942 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:17.084730 master-0 kubenswrapper[7864]: I0224 02:14:17.084627 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:17.084730 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:17.084730 master-0 
kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:17.084730 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:17.084730 master-0 kubenswrapper[7864]: I0224 02:14:17.084721 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:18.085122 master-0 kubenswrapper[7864]: I0224 02:14:18.085023 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:18.085122 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:18.085122 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:18.085122 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:18.086272 master-0 kubenswrapper[7864]: I0224 02:14:18.085166 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:19.084652 master-0 kubenswrapper[7864]: I0224 02:14:19.084555 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:19.084652 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:19.084652 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:19.084652 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:19.084652 master-0 kubenswrapper[7864]: I0224 02:14:19.084655 7864 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:19.875938 master-0 kubenswrapper[7864]: I0224 02:14:19.875844 7864 scope.go:117] "RemoveContainer" containerID="f672e7e1695861ae42afc6a8a2613baf97d7bf401a8f7ddcbb43e4e1e3238a5e" Feb 24 02:14:19.876965 master-0 kubenswrapper[7864]: E0224 02:14:19.876288 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" Feb 24 02:14:20.085594 master-0 kubenswrapper[7864]: I0224 02:14:20.085481 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:20.085594 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:20.085594 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:20.085594 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:20.085594 master-0 kubenswrapper[7864]: I0224 02:14:20.085565 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:21.084901 master-0 kubenswrapper[7864]: I0224 02:14:21.084780 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:21.084901 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:21.084901 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:21.084901 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:21.086153 master-0 kubenswrapper[7864]: I0224 02:14:21.084922 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:22.085236 master-0 kubenswrapper[7864]: I0224 02:14:22.085140 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:22.085236 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:22.085236 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:22.085236 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:22.086551 master-0 kubenswrapper[7864]: I0224 02:14:22.085325 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:23.084976 master-0 kubenswrapper[7864]: I0224 02:14:23.084864 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:23.084976 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 
24 02:14:23.084976 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:23.084976 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:23.086076 master-0 kubenswrapper[7864]: I0224 02:14:23.084989 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:24.086102 master-0 kubenswrapper[7864]: I0224 02:14:24.085995 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:24.086102 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:24.086102 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:24.086102 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:24.087294 master-0 kubenswrapper[7864]: I0224 02:14:24.086129 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:25.085065 master-0 kubenswrapper[7864]: I0224 02:14:25.084982 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:25.085065 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:25.085065 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:25.085065 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:25.085533 master-0 kubenswrapper[7864]: I0224 02:14:25.085090 
7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:26.084252 master-0 kubenswrapper[7864]: I0224 02:14:26.084128 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:26.084252 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:26.084252 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:26.084252 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:26.085375 master-0 kubenswrapper[7864]: I0224 02:14:26.084258 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:27.084738 master-0 kubenswrapper[7864]: I0224 02:14:27.084636 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:27.084738 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:27.084738 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:27.084738 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:27.085825 master-0 kubenswrapper[7864]: I0224 02:14:27.084750 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 24 02:14:28.084282 master-0 kubenswrapper[7864]: I0224 02:14:28.084197 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:28.084282 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:28.084282 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:28.084282 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:28.085660 master-0 kubenswrapper[7864]: I0224 02:14:28.084310 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:29.085101 master-0 kubenswrapper[7864]: I0224 02:14:29.085004 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:29.085101 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:29.085101 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:29.085101 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:29.086562 master-0 kubenswrapper[7864]: I0224 02:14:29.085110 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:30.085661 master-0 kubenswrapper[7864]: I0224 02:14:30.085550 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:30.085661 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:30.085661 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:30.085661 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:30.085661 master-0 kubenswrapper[7864]: I0224 02:14:30.085661 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:31.084114 master-0 kubenswrapper[7864]: I0224 02:14:31.084005 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:31.084114 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:31.084114 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:31.084114 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:31.084114 master-0 kubenswrapper[7864]: I0224 02:14:31.084115 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:32.084795 master-0 kubenswrapper[7864]: I0224 02:14:32.084662 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:32.084795 master-0 kubenswrapper[7864]: 
[-]has-synced failed: reason withheld Feb 24 02:14:32.084795 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:32.084795 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:32.085912 master-0 kubenswrapper[7864]: I0224 02:14:32.084811 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:32.875417 master-0 kubenswrapper[7864]: I0224 02:14:32.875337 7864 scope.go:117] "RemoveContainer" containerID="f672e7e1695861ae42afc6a8a2613baf97d7bf401a8f7ddcbb43e4e1e3238a5e" Feb 24 02:14:33.093500 master-0 kubenswrapper[7864]: I0224 02:14:33.093425 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:33.093500 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:33.093500 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:33.093500 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:33.094315 master-0 kubenswrapper[7864]: I0224 02:14:33.093508 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:33.124163 master-0 kubenswrapper[7864]: I0224 02:14:33.124093 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/2.log" Feb 24 02:14:33.124820 master-0 kubenswrapper[7864]: I0224 02:14:33.124762 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"049ca3ed1273887ef2dfb1c46cae5f8f4c14254dc46c7407bca3af34b7c3bdfe"} Feb 24 02:14:34.084658 master-0 kubenswrapper[7864]: I0224 02:14:34.084551 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:34.084658 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:34.084658 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:34.084658 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:34.084658 master-0 kubenswrapper[7864]: I0224 02:14:34.084655 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:35.084517 master-0 kubenswrapper[7864]: I0224 02:14:35.084434 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:35.084517 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:35.084517 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:35.084517 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:35.085701 master-0 kubenswrapper[7864]: I0224 02:14:35.084558 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Feb 24 02:14:36.085024 master-0 kubenswrapper[7864]: I0224 02:14:36.084935 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:36.085024 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:36.085024 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:36.085024 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:36.086205 master-0 kubenswrapper[7864]: I0224 02:14:36.085053 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:37.085236 master-0 kubenswrapper[7864]: I0224 02:14:37.085134 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:37.085236 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:37.085236 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:37.085236 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:37.086360 master-0 kubenswrapper[7864]: I0224 02:14:37.085248 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:38.084766 master-0 kubenswrapper[7864]: I0224 02:14:38.084667 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:38.084766 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:38.084766 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:38.084766 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:38.085288 master-0 kubenswrapper[7864]: I0224 02:14:38.084781 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:39.085143 master-0 kubenswrapper[7864]: I0224 02:14:39.085040 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:39.085143 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:39.085143 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:39.085143 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:39.086467 master-0 kubenswrapper[7864]: I0224 02:14:39.085146 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:39.913892 master-0 kubenswrapper[7864]: I0224 02:14:39.913796 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-75qmm"] Feb 24 02:14:39.914939 master-0 kubenswrapper[7864]: I0224 02:14:39.914903 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:39.918855 master-0 kubenswrapper[7864]: I0224 02:14:39.918810 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-nhgxj" Feb 24 02:14:39.919659 master-0 kubenswrapper[7864]: I0224 02:14:39.919354 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 24 02:14:40.077814 master-0 kubenswrapper[7864]: I0224 02:14:40.077726 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/36483bf4-9e27-4c15-bd83-bde809a64b5c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.077814 master-0 kubenswrapper[7864]: I0224 02:14:40.077802 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/36483bf4-9e27-4c15-bd83-bde809a64b5c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.078147 master-0 kubenswrapper[7864]: I0224 02:14:40.077852 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz8z8\" (UniqueName: \"kubernetes.io/projected/36483bf4-9e27-4c15-bd83-bde809a64b5c-kube-api-access-qz8z8\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.078147 master-0 kubenswrapper[7864]: I0224 02:14:40.077880 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/36483bf4-9e27-4c15-bd83-bde809a64b5c-ready\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.085844 master-0 kubenswrapper[7864]: I0224 02:14:40.085681 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:40.085844 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:40.085844 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:40.085844 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:40.085844 master-0 kubenswrapper[7864]: I0224 02:14:40.085773 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:40.179470 master-0 kubenswrapper[7864]: I0224 02:14:40.179340 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/36483bf4-9e27-4c15-bd83-bde809a64b5c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.179470 master-0 kubenswrapper[7864]: I0224 02:14:40.179414 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/36483bf4-9e27-4c15-bd83-bde809a64b5c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.179742 master-0 kubenswrapper[7864]: I0224 
02:14:40.179577 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/36483bf4-9e27-4c15-bd83-bde809a64b5c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.179742 master-0 kubenswrapper[7864]: I0224 02:14:40.179635 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz8z8\" (UniqueName: \"kubernetes.io/projected/36483bf4-9e27-4c15-bd83-bde809a64b5c-kube-api-access-qz8z8\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.180722 master-0 kubenswrapper[7864]: I0224 02:14:40.180662 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/36483bf4-9e27-4c15-bd83-bde809a64b5c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.180856 master-0 kubenswrapper[7864]: I0224 02:14:40.180818 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/36483bf4-9e27-4c15-bd83-bde809a64b5c-ready\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.184523 master-0 kubenswrapper[7864]: I0224 02:14:40.181376 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/36483bf4-9e27-4c15-bd83-bde809a64b5c-ready\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.219345 master-0 
kubenswrapper[7864]: I0224 02:14:40.219259 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz8z8\" (UniqueName: \"kubernetes.io/projected/36483bf4-9e27-4c15-bd83-bde809a64b5c-kube-api-access-qz8z8\") pod \"cni-sysctl-allowlist-ds-75qmm\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:40.248887 master-0 kubenswrapper[7864]: I0224 02:14:40.248785 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:41.085232 master-0 kubenswrapper[7864]: I0224 02:14:41.085132 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:41.085232 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:41.085232 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:41.085232 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:41.085749 master-0 kubenswrapper[7864]: I0224 02:14:41.085238 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:41.204261 master-0 kubenswrapper[7864]: I0224 02:14:41.204181 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" event={"ID":"36483bf4-9e27-4c15-bd83-bde809a64b5c","Type":"ContainerStarted","Data":"4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9"} Feb 24 02:14:41.204261 master-0 kubenswrapper[7864]: I0224 02:14:41.204250 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" 
event={"ID":"36483bf4-9e27-4c15-bd83-bde809a64b5c","Type":"ContainerStarted","Data":"21c99d51c26516b820359d2a4b1fd0df121190c0e505cd23ba2c2f47d16cd7f9"} Feb 24 02:14:41.205433 master-0 kubenswrapper[7864]: I0224 02:14:41.205366 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:41.245351 master-0 kubenswrapper[7864]: I0224 02:14:41.245210 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" podStartSLOduration=2.245165187 podStartE2EDuration="2.245165187s" podCreationTimestamp="2026-02-24 02:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:14:41.237500603 +0000 UTC m=+645.565154255" watchObservedRunningTime="2026-02-24 02:14:41.245165187 +0000 UTC m=+645.572818849" Feb 24 02:14:41.249869 master-0 kubenswrapper[7864]: I0224 02:14:41.249809 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:14:41.941785 master-0 kubenswrapper[7864]: I0224 02:14:41.941629 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-75qmm"] Feb 24 02:14:42.084713 master-0 kubenswrapper[7864]: I0224 02:14:42.084635 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:42.084713 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:42.084713 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:42.084713 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:42.085153 master-0 kubenswrapper[7864]: I0224 02:14:42.084750 7864 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:43.085774 master-0 kubenswrapper[7864]: I0224 02:14:43.085651 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:43.085774 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:43.085774 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:43.085774 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:43.086908 master-0 kubenswrapper[7864]: I0224 02:14:43.085809 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:43.221977 master-0 kubenswrapper[7864]: I0224 02:14:43.221831 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" podUID="36483bf4-9e27-4c15-bd83-bde809a64b5c" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" gracePeriod=30 Feb 24 02:14:43.609991 master-0 kubenswrapper[7864]: I0224 02:14:43.609917 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 24 02:14:43.612082 master-0 kubenswrapper[7864]: I0224 02:14:43.612014 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.615352 master-0 kubenswrapper[7864]: I0224 02:14:43.615313 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 24 02:14:43.615714 master-0 kubenswrapper[7864]: I0224 02:14:43.615662 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-z9svp" Feb 24 02:14:43.625998 master-0 kubenswrapper[7864]: I0224 02:14:43.625937 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 24 02:14:43.754703 master-0 kubenswrapper[7864]: I0224 02:14:43.754632 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50c78047-1c4d-4535-ba2c-31f080d6a57d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.754868 master-0 kubenswrapper[7864]: I0224 02:14:43.754718 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-var-lock\") pod \"installer-2-master-0\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.755167 master-0 kubenswrapper[7864]: I0224 02:14:43.755112 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.856811 master-0 kubenswrapper[7864]: I0224 02:14:43.856737 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-var-lock\") pod \"installer-2-master-0\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.856977 master-0 kubenswrapper[7864]: I0224 02:14:43.856859 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-var-lock\") pod \"installer-2-master-0\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.856977 master-0 kubenswrapper[7864]: I0224 02:14:43.856914 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.856977 master-0 kubenswrapper[7864]: I0224 02:14:43.856965 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.857192 master-0 kubenswrapper[7864]: I0224 02:14:43.857077 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50c78047-1c4d-4535-ba2c-31f080d6a57d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.886423 master-0 kubenswrapper[7864]: I0224 02:14:43.886258 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50c78047-1c4d-4535-ba2c-31f080d6a57d-kube-api-access\") pod 
\"installer-2-master-0\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:43.984906 master-0 kubenswrapper[7864]: I0224 02:14:43.984794 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 24 02:14:44.086446 master-0 kubenswrapper[7864]: I0224 02:14:44.086370 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:44.086446 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:44.086446 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:44.086446 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:44.087396 master-0 kubenswrapper[7864]: I0224 02:14:44.086469 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:44.537481 master-0 kubenswrapper[7864]: I0224 02:14:44.537412 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 24 02:14:44.545190 master-0 kubenswrapper[7864]: W0224 02:14:44.545111 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod50c78047_1c4d_4535_ba2c_31f080d6a57d.slice/crio-073c8b4053396ee6cbbc1314c9a8361c6af6be047fe705c3463b911834d8b963 WatchSource:0}: Error finding container 073c8b4053396ee6cbbc1314c9a8361c6af6be047fe705c3463b911834d8b963: Status 404 returned error can't find the container with id 073c8b4053396ee6cbbc1314c9a8361c6af6be047fe705c3463b911834d8b963 Feb 24 02:14:45.085407 master-0 kubenswrapper[7864]: I0224 02:14:45.085127 7864 patch_prober.go:28] 
interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:45.085407 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:45.085407 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:45.085407 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:45.085407 master-0 kubenswrapper[7864]: I0224 02:14:45.085254 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:45.272964 master-0 kubenswrapper[7864]: I0224 02:14:45.272855 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"50c78047-1c4d-4535-ba2c-31f080d6a57d","Type":"ContainerStarted","Data":"7bb232625f3494579f18ed676cbbdfe8d63a7f633ead8439889c5a5bfa8b5a12"} Feb 24 02:14:45.273909 master-0 kubenswrapper[7864]: I0224 02:14:45.273001 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"50c78047-1c4d-4535-ba2c-31f080d6a57d","Type":"ContainerStarted","Data":"073c8b4053396ee6cbbc1314c9a8361c6af6be047fe705c3463b911834d8b963"} Feb 24 02:14:45.314679 master-0 kubenswrapper[7864]: I0224 02:14:45.313740 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.313700166 podStartE2EDuration="2.313700166s" podCreationTimestamp="2026-02-24 02:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:14:45.304407748 +0000 UTC m=+649.632061400" watchObservedRunningTime="2026-02-24 02:14:45.313700166 +0000 UTC 
m=+649.641353818" Feb 24 02:14:46.085736 master-0 kubenswrapper[7864]: I0224 02:14:46.085609 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:46.085736 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:46.085736 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:46.085736 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:46.086273 master-0 kubenswrapper[7864]: I0224 02:14:46.085737 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:47.086110 master-0 kubenswrapper[7864]: I0224 02:14:47.085989 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:47.086110 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:47.086110 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:47.086110 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:47.087263 master-0 kubenswrapper[7864]: I0224 02:14:47.086162 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:48.087110 master-0 kubenswrapper[7864]: I0224 02:14:48.087018 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:48.087110 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:48.087110 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:48.087110 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:48.088373 master-0 kubenswrapper[7864]: I0224 02:14:48.087145 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:49.085321 master-0 kubenswrapper[7864]: I0224 02:14:49.085250 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:49.085321 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:49.085321 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:49.085321 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:49.085990 master-0 kubenswrapper[7864]: I0224 02:14:49.085939 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:49.524118 master-0 kubenswrapper[7864]: I0224 02:14:49.524036 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-ctssl"] Feb 24 02:14:49.525983 master-0 kubenswrapper[7864]: I0224 02:14:49.525898 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" Feb 24 02:14:49.532479 master-0 kubenswrapper[7864]: I0224 02:14:49.532419 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-ddb6g" Feb 24 02:14:49.546029 master-0 kubenswrapper[7864]: I0224 02:14:49.545958 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-ctssl"] Feb 24 02:14:49.677610 master-0 kubenswrapper[7864]: I0224 02:14:49.676819 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" Feb 24 02:14:49.677610 master-0 kubenswrapper[7864]: I0224 02:14:49.676966 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xz68\" (UniqueName: \"kubernetes.io/projected/e68b3061-c9d2-469d-babf-7ccac0ad9b14-kube-api-access-6xz68\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" Feb 24 02:14:49.778914 master-0 kubenswrapper[7864]: I0224 02:14:49.778750 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" Feb 24 02:14:49.779158 master-0 kubenswrapper[7864]: I0224 02:14:49.779008 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6xz68\" (UniqueName: \"kubernetes.io/projected/e68b3061-c9d2-469d-babf-7ccac0ad9b14-kube-api-access-6xz68\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" Feb 24 02:14:49.783020 master-0 kubenswrapper[7864]: I0224 02:14:49.782955 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" Feb 24 02:14:49.799406 master-0 kubenswrapper[7864]: I0224 02:14:49.799357 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xz68\" (UniqueName: \"kubernetes.io/projected/e68b3061-c9d2-469d-babf-7ccac0ad9b14-kube-api-access-6xz68\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" Feb 24 02:14:49.880912 master-0 kubenswrapper[7864]: I0224 02:14:49.880856 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" Feb 24 02:14:50.085585 master-0 kubenswrapper[7864]: I0224 02:14:50.085519 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:50.085585 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:50.085585 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:50.085585 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:50.086159 master-0 kubenswrapper[7864]: I0224 02:14:50.085601 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:50.251993 master-0 kubenswrapper[7864]: E0224 02:14:50.251881 7864 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 02:14:50.254345 master-0 kubenswrapper[7864]: E0224 02:14:50.254275 7864 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 02:14:50.256123 master-0 kubenswrapper[7864]: E0224 02:14:50.256068 7864 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 02:14:50.256290 master-0 kubenswrapper[7864]: E0224 02:14:50.256130 7864 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" podUID="36483bf4-9e27-4c15-bd83-bde809a64b5c" containerName="kube-multus-additional-cni-plugins" Feb 24 02:14:50.372739 master-0 kubenswrapper[7864]: I0224 02:14:50.372665 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-ctssl"] Feb 24 02:14:50.378437 master-0 kubenswrapper[7864]: W0224 02:14:50.378381 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode68b3061_c9d2_469d_babf_7ccac0ad9b14.slice/crio-14bae3b4a416d85f1092a8f00a7f0b630edcc978d596d53dd23dbb652596ecd8 WatchSource:0}: Error finding container 14bae3b4a416d85f1092a8f00a7f0b630edcc978d596d53dd23dbb652596ecd8: Status 404 returned error can't find the container with id 14bae3b4a416d85f1092a8f00a7f0b630edcc978d596d53dd23dbb652596ecd8 Feb 24 02:14:51.085532 master-0 kubenswrapper[7864]: I0224 02:14:51.085447 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:51.085532 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:51.085532 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:51.085532 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:51.094360 master-0 kubenswrapper[7864]: I0224 02:14:51.085549 7864 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:51.367040 master-0 kubenswrapper[7864]: I0224 02:14:51.366855 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" event={"ID":"e68b3061-c9d2-469d-babf-7ccac0ad9b14","Type":"ContainerStarted","Data":"56d3fdb5093e1d4eccefbac182f17fc4900bc5eca061248db29c85445165c4dc"} Feb 24 02:14:51.367040 master-0 kubenswrapper[7864]: I0224 02:14:51.366949 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" event={"ID":"e68b3061-c9d2-469d-babf-7ccac0ad9b14","Type":"ContainerStarted","Data":"e6b7fffe1df34eb4ec92449883945249c2a727f4ae3f99a2b5f3f554aa75c619"} Feb 24 02:14:51.367040 master-0 kubenswrapper[7864]: I0224 02:14:51.366965 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" event={"ID":"e68b3061-c9d2-469d-babf-7ccac0ad9b14","Type":"ContainerStarted","Data":"14bae3b4a416d85f1092a8f00a7f0b630edcc978d596d53dd23dbb652596ecd8"} Feb 24 02:14:51.399626 master-0 kubenswrapper[7864]: I0224 02:14:51.399526 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" podStartSLOduration=2.399501002 podStartE2EDuration="2.399501002s" podCreationTimestamp="2026-02-24 02:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:14:51.392628031 +0000 UTC m=+655.720281683" watchObservedRunningTime="2026-02-24 02:14:51.399501002 +0000 UTC m=+655.727154634" Feb 24 02:14:51.457166 master-0 kubenswrapper[7864]: I0224 02:14:51.452818 7864 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"] Feb 24 02:14:51.457166 master-0 kubenswrapper[7864]: I0224 02:14:51.453185 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" containerName="multus-admission-controller" containerID="cri-o://1a9c80348ec3d9615f2f58e4f90b6f801e400fc962ca77ec229b6df397014b2d" gracePeriod=30 Feb 24 02:14:51.457166 master-0 kubenswrapper[7864]: I0224 02:14:51.453510 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" containerName="kube-rbac-proxy" containerID="cri-o://206e32f211480b70d154888e7eaab059acddf0419748ec8afd5db9a5bab1c507" gracePeriod=30 Feb 24 02:14:52.084271 master-0 kubenswrapper[7864]: I0224 02:14:52.084179 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:52.084271 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:52.084271 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:52.084271 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:52.084818 master-0 kubenswrapper[7864]: I0224 02:14:52.084301 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:52.387935 master-0 kubenswrapper[7864]: I0224 02:14:52.387766 7864 generic.go:334] "Generic (PLEG): container finished" podID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" 
containerID="206e32f211480b70d154888e7eaab059acddf0419748ec8afd5db9a5bab1c507" exitCode=0 Feb 24 02:14:52.388918 master-0 kubenswrapper[7864]: I0224 02:14:52.387895 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" event={"ID":"dc3d08db-45fa-4fef-b1fd-2875f22d5c45","Type":"ContainerDied","Data":"206e32f211480b70d154888e7eaab059acddf0419748ec8afd5db9a5bab1c507"} Feb 24 02:14:53.085270 master-0 kubenswrapper[7864]: I0224 02:14:53.085157 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:53.085270 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:53.085270 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:53.085270 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:53.085270 master-0 kubenswrapper[7864]: I0224 02:14:53.085264 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:54.085179 master-0 kubenswrapper[7864]: I0224 02:14:54.085060 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:54.085179 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:54.085179 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:54.085179 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:54.085179 master-0 kubenswrapper[7864]: I0224 02:14:54.085162 7864 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:55.085516 master-0 kubenswrapper[7864]: I0224 02:14:55.085422 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:55.085516 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:55.085516 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:55.085516 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:55.085516 master-0 kubenswrapper[7864]: I0224 02:14:55.085516 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:56.084735 master-0 kubenswrapper[7864]: I0224 02:14:56.084646 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:56.084735 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:56.084735 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:56.084735 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:56.086844 master-0 kubenswrapper[7864]: I0224 02:14:56.084759 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 24 02:14:57.085052 master-0 kubenswrapper[7864]: I0224 02:14:57.084963 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:57.085052 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:57.085052 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:57.085052 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:57.085606 master-0 kubenswrapper[7864]: I0224 02:14:57.085088 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:58.085068 master-0 kubenswrapper[7864]: I0224 02:14:58.084962 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:58.085068 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:58.085068 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:58.085068 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:58.085068 master-0 kubenswrapper[7864]: I0224 02:14:58.085065 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:14:59.085165 master-0 kubenswrapper[7864]: I0224 02:14:59.085050 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:14:59.085165 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:14:59.085165 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:14:59.085165 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:14:59.086314 master-0 kubenswrapper[7864]: I0224 02:14:59.085183 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:00.088064 master-0 kubenswrapper[7864]: I0224 02:15:00.087950 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:00.088064 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:00.088064 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:00.088064 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:00.089176 master-0 kubenswrapper[7864]: I0224 02:15:00.088084 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:00.176839 master-0 kubenswrapper[7864]: I0224 02:15:00.176776 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn"] Feb 24 02:15:00.178516 master-0 kubenswrapper[7864]: I0224 02:15:00.178485 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.180484 master-0 kubenswrapper[7864]: I0224 02:15:00.180392 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 02:15:00.180855 master-0 kubenswrapper[7864]: I0224 02:15:00.180788 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-dqz7q" Feb 24 02:15:00.191733 master-0 kubenswrapper[7864]: I0224 02:15:00.190281 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn"] Feb 24 02:15:00.252165 master-0 kubenswrapper[7864]: E0224 02:15:00.252052 7864 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 02:15:00.253683 master-0 kubenswrapper[7864]: E0224 02:15:00.253611 7864 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 02:15:00.256424 master-0 kubenswrapper[7864]: E0224 02:15:00.255389 7864 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 02:15:00.256424 master-0 kubenswrapper[7864]: E0224 
02:15:00.255422 7864 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" podUID="36483bf4-9e27-4c15-bd83-bde809a64b5c" containerName="kube-multus-additional-cni-plugins" Feb 24 02:15:00.311803 master-0 kubenswrapper[7864]: I0224 02:15:00.311693 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06fb1d82-f9e9-473b-80c5-767ec3948bd4-config-volume\") pod \"collect-profiles-29531655-kw6fn\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.311803 master-0 kubenswrapper[7864]: I0224 02:15:00.311817 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbrj\" (UniqueName: \"kubernetes.io/projected/06fb1d82-f9e9-473b-80c5-767ec3948bd4-kube-api-access-fbbrj\") pod \"collect-profiles-29531655-kw6fn\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.312332 master-0 kubenswrapper[7864]: I0224 02:15:00.311908 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06fb1d82-f9e9-473b-80c5-767ec3948bd4-secret-volume\") pod \"collect-profiles-29531655-kw6fn\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.413753 master-0 kubenswrapper[7864]: I0224 02:15:00.413615 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06fb1d82-f9e9-473b-80c5-767ec3948bd4-secret-volume\") pod 
\"collect-profiles-29531655-kw6fn\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.413940 master-0 kubenswrapper[7864]: I0224 02:15:00.413787 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06fb1d82-f9e9-473b-80c5-767ec3948bd4-config-volume\") pod \"collect-profiles-29531655-kw6fn\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.413940 master-0 kubenswrapper[7864]: I0224 02:15:00.413855 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbbrj\" (UniqueName: \"kubernetes.io/projected/06fb1d82-f9e9-473b-80c5-767ec3948bd4-kube-api-access-fbbrj\") pod \"collect-profiles-29531655-kw6fn\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.415665 master-0 kubenswrapper[7864]: I0224 02:15:00.415607 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06fb1d82-f9e9-473b-80c5-767ec3948bd4-config-volume\") pod \"collect-profiles-29531655-kw6fn\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.419063 master-0 kubenswrapper[7864]: I0224 02:15:00.419008 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06fb1d82-f9e9-473b-80c5-767ec3948bd4-secret-volume\") pod \"collect-profiles-29531655-kw6fn\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.449197 master-0 kubenswrapper[7864]: I0224 02:15:00.449131 7864 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fbbrj\" (UniqueName: \"kubernetes.io/projected/06fb1d82-f9e9-473b-80c5-767ec3948bd4-kube-api-access-fbbrj\") pod \"collect-profiles-29531655-kw6fn\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.499391 master-0 kubenswrapper[7864]: I0224 02:15:00.499299 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:00.999627 master-0 kubenswrapper[7864]: I0224 02:15:00.999222 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn"] Feb 24 02:15:01.009103 master-0 kubenswrapper[7864]: W0224 02:15:01.008989 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06fb1d82_f9e9_473b_80c5_767ec3948bd4.slice/crio-d422768badc0a68fd4cf2f302f097d6619f8211838023083d72164e3cae439f7 WatchSource:0}: Error finding container d422768badc0a68fd4cf2f302f097d6619f8211838023083d72164e3cae439f7: Status 404 returned error can't find the container with id d422768badc0a68fd4cf2f302f097d6619f8211838023083d72164e3cae439f7 Feb 24 02:15:01.086093 master-0 kubenswrapper[7864]: I0224 02:15:01.086005 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:01.086093 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:01.086093 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:01.086093 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:01.086450 master-0 kubenswrapper[7864]: I0224 02:15:01.086122 7864 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:01.482162 master-0 kubenswrapper[7864]: I0224 02:15:01.482071 7864 generic.go:334] "Generic (PLEG): container finished" podID="06fb1d82-f9e9-473b-80c5-767ec3948bd4" containerID="23e5ece2a1174ce846ce41906ef5a0fcc35a5f58a900b96b34aee280e09c4850" exitCode=0 Feb 24 02:15:01.482162 master-0 kubenswrapper[7864]: I0224 02:15:01.482158 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" event={"ID":"06fb1d82-f9e9-473b-80c5-767ec3948bd4","Type":"ContainerDied","Data":"23e5ece2a1174ce846ce41906ef5a0fcc35a5f58a900b96b34aee280e09c4850"} Feb 24 02:15:01.483014 master-0 kubenswrapper[7864]: I0224 02:15:01.482210 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" event={"ID":"06fb1d82-f9e9-473b-80c5-767ec3948bd4","Type":"ContainerStarted","Data":"d422768badc0a68fd4cf2f302f097d6619f8211838023083d72164e3cae439f7"} Feb 24 02:15:01.813410 master-0 kubenswrapper[7864]: I0224 02:15:01.813316 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57df7db547-2v9c5"] Feb 24 02:15:01.816338 master-0 kubenswrapper[7864]: I0224 02:15:01.813721 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" containerID="cri-o://3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1" gracePeriod=30 Feb 24 02:15:01.837603 master-0 kubenswrapper[7864]: I0224 02:15:01.837498 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"] Feb 24 02:15:01.837898 master-0 kubenswrapper[7864]: I0224 02:15:01.837847 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" podUID="8e6fd0d2-d629-4399-b008-979f28390943" containerName="route-controller-manager" containerID="cri-o://f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55" gracePeriod=30 Feb 24 02:15:02.103071 master-0 kubenswrapper[7864]: I0224 02:15:02.103012 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:02.103071 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:02.103071 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:02.103071 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:02.103408 master-0 kubenswrapper[7864]: I0224 02:15:02.103085 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:02.310201 master-0 kubenswrapper[7864]: I0224 02:15:02.310129 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:15:02.352800 master-0 kubenswrapper[7864]: I0224 02:15:02.352676 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-client-ca\") pod \"bd1a99d5-e213-42b3-9538-44f68d993184\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " Feb 24 02:15:02.352947 master-0 kubenswrapper[7864]: I0224 02:15:02.352871 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd4jm\" (UniqueName: \"kubernetes.io/projected/bd1a99d5-e213-42b3-9538-44f68d993184-kube-api-access-xd4jm\") pod \"bd1a99d5-e213-42b3-9538-44f68d993184\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " Feb 24 02:15:02.353023 master-0 kubenswrapper[7864]: I0224 02:15:02.352977 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1a99d5-e213-42b3-9538-44f68d993184-serving-cert\") pod \"bd1a99d5-e213-42b3-9538-44f68d993184\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " Feb 24 02:15:02.353023 master-0 kubenswrapper[7864]: I0224 02:15:02.353010 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-config\") pod \"bd1a99d5-e213-42b3-9538-44f68d993184\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " Feb 24 02:15:02.353142 master-0 kubenswrapper[7864]: I0224 02:15:02.353111 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-proxy-ca-bundles\") pod \"bd1a99d5-e213-42b3-9538-44f68d993184\" (UID: \"bd1a99d5-e213-42b3-9538-44f68d993184\") " Feb 24 02:15:02.354305 master-0 kubenswrapper[7864]: I0224 02:15:02.354219 7864 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-client-ca" (OuterVolumeSpecName: "client-ca") pod "bd1a99d5-e213-42b3-9538-44f68d993184" (UID: "bd1a99d5-e213-42b3-9538-44f68d993184"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:15:02.354399 master-0 kubenswrapper[7864]: I0224 02:15:02.354280 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bd1a99d5-e213-42b3-9538-44f68d993184" (UID: "bd1a99d5-e213-42b3-9538-44f68d993184"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:15:02.354399 master-0 kubenswrapper[7864]: I0224 02:15:02.354301 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-config" (OuterVolumeSpecName: "config") pod "bd1a99d5-e213-42b3-9538-44f68d993184" (UID: "bd1a99d5-e213-42b3-9538-44f68d993184"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:15:02.359658 master-0 kubenswrapper[7864]: I0224 02:15:02.359554 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1a99d5-e213-42b3-9538-44f68d993184-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bd1a99d5-e213-42b3-9538-44f68d993184" (UID: "bd1a99d5-e213-42b3-9538-44f68d993184"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:15:02.363929 master-0 kubenswrapper[7864]: I0224 02:15:02.363858 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1a99d5-e213-42b3-9538-44f68d993184-kube-api-access-xd4jm" (OuterVolumeSpecName: "kube-api-access-xd4jm") pod "bd1a99d5-e213-42b3-9538-44f68d993184" (UID: "bd1a99d5-e213-42b3-9538-44f68d993184"). InnerVolumeSpecName "kube-api-access-xd4jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:15:02.397279 master-0 kubenswrapper[7864]: I0224 02:15:02.397203 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:15:02.455018 master-0 kubenswrapper[7864]: I0224 02:15:02.454927 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-config\") pod \"8e6fd0d2-d629-4399-b008-979f28390943\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " Feb 24 02:15:02.455555 master-0 kubenswrapper[7864]: I0224 02:15:02.455485 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-client-ca\") pod \"8e6fd0d2-d629-4399-b008-979f28390943\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " Feb 24 02:15:02.455555 master-0 kubenswrapper[7864]: I0224 02:15:02.455546 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqr2q\" (UniqueName: \"kubernetes.io/projected/8e6fd0d2-d629-4399-b008-979f28390943-kube-api-access-bqr2q\") pod \"8e6fd0d2-d629-4399-b008-979f28390943\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " Feb 24 02:15:02.456614 master-0 kubenswrapper[7864]: I0224 02:15:02.455624 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e6fd0d2-d629-4399-b008-979f28390943-serving-cert\") pod \"8e6fd0d2-d629-4399-b008-979f28390943\" (UID: \"8e6fd0d2-d629-4399-b008-979f28390943\") " Feb 24 02:15:02.456614 master-0 kubenswrapper[7864]: I0224 02:15:02.455890 7864 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1a99d5-e213-42b3-9538-44f68d993184-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:02.456614 master-0 kubenswrapper[7864]: I0224 02:15:02.455910 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:02.456614 master-0 kubenswrapper[7864]: I0224 02:15:02.455923 7864 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:02.456614 master-0 kubenswrapper[7864]: I0224 02:15:02.455936 7864 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd1a99d5-e213-42b3-9538-44f68d993184-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:02.456614 master-0 kubenswrapper[7864]: I0224 02:15:02.455952 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd4jm\" (UniqueName: \"kubernetes.io/projected/bd1a99d5-e213-42b3-9538-44f68d993184-kube-api-access-xd4jm\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:02.456614 master-0 kubenswrapper[7864]: I0224 02:15:02.456277 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-config" (OuterVolumeSpecName: "config") pod "8e6fd0d2-d629-4399-b008-979f28390943" (UID: "8e6fd0d2-d629-4399-b008-979f28390943"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:15:02.456614 master-0 kubenswrapper[7864]: I0224 02:15:02.456354 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-client-ca" (OuterVolumeSpecName: "client-ca") pod "8e6fd0d2-d629-4399-b008-979f28390943" (UID: "8e6fd0d2-d629-4399-b008-979f28390943"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:15:02.462652 master-0 kubenswrapper[7864]: I0224 02:15:02.461200 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6fd0d2-d629-4399-b008-979f28390943-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8e6fd0d2-d629-4399-b008-979f28390943" (UID: "8e6fd0d2-d629-4399-b008-979f28390943"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:15:02.462989 master-0 kubenswrapper[7864]: I0224 02:15:02.462789 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6fd0d2-d629-4399-b008-979f28390943-kube-api-access-bqr2q" (OuterVolumeSpecName: "kube-api-access-bqr2q") pod "8e6fd0d2-d629-4399-b008-979f28390943" (UID: "8e6fd0d2-d629-4399-b008-979f28390943"). InnerVolumeSpecName "kube-api-access-bqr2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:15:02.490471 master-0 kubenswrapper[7864]: I0224 02:15:02.490400 7864 generic.go:334] "Generic (PLEG): container finished" podID="bd1a99d5-e213-42b3-9538-44f68d993184" containerID="3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1" exitCode=0 Feb 24 02:15:02.491121 master-0 kubenswrapper[7864]: I0224 02:15:02.490514 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" event={"ID":"bd1a99d5-e213-42b3-9538-44f68d993184","Type":"ContainerDied","Data":"3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1"} Feb 24 02:15:02.491121 master-0 kubenswrapper[7864]: I0224 02:15:02.490644 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" Feb 24 02:15:02.491121 master-0 kubenswrapper[7864]: I0224 02:15:02.490779 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57df7db547-2v9c5" event={"ID":"bd1a99d5-e213-42b3-9538-44f68d993184","Type":"ContainerDied","Data":"4ead8768953af0cfa689ea585f04601b758642042fcf9f682eda77679bdd25c2"} Feb 24 02:15:02.491121 master-0 kubenswrapper[7864]: I0224 02:15:02.490814 7864 scope.go:117] "RemoveContainer" containerID="3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1" Feb 24 02:15:02.520768 master-0 kubenswrapper[7864]: I0224 02:15:02.520619 7864 generic.go:334] "Generic (PLEG): container finished" podID="8e6fd0d2-d629-4399-b008-979f28390943" containerID="f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55" exitCode=0 Feb 24 02:15:02.524605 master-0 kubenswrapper[7864]: I0224 02:15:02.520947 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" Feb 24 02:15:02.524605 master-0 kubenswrapper[7864]: I0224 02:15:02.520942 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" event={"ID":"8e6fd0d2-d629-4399-b008-979f28390943","Type":"ContainerDied","Data":"f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55"} Feb 24 02:15:02.524605 master-0 kubenswrapper[7864]: I0224 02:15:02.521207 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9" event={"ID":"8e6fd0d2-d629-4399-b008-979f28390943","Type":"ContainerDied","Data":"9ab3d7488d1528aaa7ffd06e009913bbe758abf8174f66f8797cbc3ed9bce292"} Feb 24 02:15:02.556955 master-0 kubenswrapper[7864]: I0224 02:15:02.556902 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:02.556955 master-0 kubenswrapper[7864]: I0224 02:15:02.556947 7864 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e6fd0d2-d629-4399-b008-979f28390943-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:02.556955 master-0 kubenswrapper[7864]: I0224 02:15:02.556964 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqr2q\" (UniqueName: \"kubernetes.io/projected/8e6fd0d2-d629-4399-b008-979f28390943-kube-api-access-bqr2q\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:02.556955 master-0 kubenswrapper[7864]: I0224 02:15:02.556978 7864 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e6fd0d2-d629-4399-b008-979f28390943-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:02.569042 master-0 
kubenswrapper[7864]: I0224 02:15:02.566905 7864 scope.go:117] "RemoveContainer" containerID="5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21" Feb 24 02:15:02.572275 master-0 kubenswrapper[7864]: I0224 02:15:02.572211 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57df7db547-2v9c5"] Feb 24 02:15:02.582626 master-0 kubenswrapper[7864]: I0224 02:15:02.582536 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57df7db547-2v9c5"] Feb 24 02:15:02.601489 master-0 kubenswrapper[7864]: I0224 02:15:02.601156 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"] Feb 24 02:15:02.605712 master-0 kubenswrapper[7864]: I0224 02:15:02.605598 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56cd46585c-nhkd9"] Feb 24 02:15:02.610293 master-0 kubenswrapper[7864]: I0224 02:15:02.610247 7864 scope.go:117] "RemoveContainer" containerID="3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1" Feb 24 02:15:02.610978 master-0 kubenswrapper[7864]: E0224 02:15:02.610933 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1\": container with ID starting with 3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1 not found: ID does not exist" containerID="3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1" Feb 24 02:15:02.611050 master-0 kubenswrapper[7864]: I0224 02:15:02.610983 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1"} err="failed to get container status 
\"3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1\": rpc error: code = NotFound desc = could not find container \"3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1\": container with ID starting with 3b3dc3f3343efcc761aa7131cd129c211df9cb96a7206172c4bf7e73b8ab89d1 not found: ID does not exist" Feb 24 02:15:02.611050 master-0 kubenswrapper[7864]: I0224 02:15:02.611019 7864 scope.go:117] "RemoveContainer" containerID="5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21" Feb 24 02:15:02.611635 master-0 kubenswrapper[7864]: E0224 02:15:02.611589 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21\": container with ID starting with 5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21 not found: ID does not exist" containerID="5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21" Feb 24 02:15:02.611704 master-0 kubenswrapper[7864]: I0224 02:15:02.611636 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21"} err="failed to get container status \"5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21\": rpc error: code = NotFound desc = could not find container \"5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21\": container with ID starting with 5ce0ca46b5707d074f514e4d89c259a98815b6c015e08d177ffa0a7a40772a21 not found: ID does not exist" Feb 24 02:15:02.611704 master-0 kubenswrapper[7864]: I0224 02:15:02.611678 7864 scope.go:117] "RemoveContainer" containerID="f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55" Feb 24 02:15:02.635593 master-0 kubenswrapper[7864]: I0224 02:15:02.635524 7864 scope.go:117] "RemoveContainer" containerID="f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55" Feb 24 
02:15:02.636549 master-0 kubenswrapper[7864]: E0224 02:15:02.636367 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55\": container with ID starting with f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55 not found: ID does not exist" containerID="f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55" Feb 24 02:15:02.636549 master-0 kubenswrapper[7864]: I0224 02:15:02.636415 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55"} err="failed to get container status \"f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55\": rpc error: code = NotFound desc = could not find container \"f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55\": container with ID starting with f07b242f8321209046afc90d0d2ec30b10004f1058240cc14bdc59e604994c55 not found: ID does not exist" Feb 24 02:15:02.928613 master-0 kubenswrapper[7864]: I0224 02:15:02.927565 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:02.978728 master-0 kubenswrapper[7864]: I0224 02:15:02.963487 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06fb1d82-f9e9-473b-80c5-767ec3948bd4-secret-volume\") pod \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " Feb 24 02:15:02.978728 master-0 kubenswrapper[7864]: I0224 02:15:02.963723 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbbrj\" (UniqueName: \"kubernetes.io/projected/06fb1d82-f9e9-473b-80c5-767ec3948bd4-kube-api-access-fbbrj\") pod \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " Feb 24 02:15:02.978728 master-0 kubenswrapper[7864]: I0224 02:15:02.963897 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06fb1d82-f9e9-473b-80c5-767ec3948bd4-config-volume\") pod \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\" (UID: \"06fb1d82-f9e9-473b-80c5-767ec3948bd4\") " Feb 24 02:15:02.978728 master-0 kubenswrapper[7864]: I0224 02:15:02.965344 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06fb1d82-f9e9-473b-80c5-767ec3948bd4-config-volume" (OuterVolumeSpecName: "config-volume") pod "06fb1d82-f9e9-473b-80c5-767ec3948bd4" (UID: "06fb1d82-f9e9-473b-80c5-767ec3948bd4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:15:02.983747 master-0 kubenswrapper[7864]: I0224 02:15:02.983671 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06fb1d82-f9e9-473b-80c5-767ec3948bd4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "06fb1d82-f9e9-473b-80c5-767ec3948bd4" (UID: "06fb1d82-f9e9-473b-80c5-767ec3948bd4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:15:02.983747 master-0 kubenswrapper[7864]: I0224 02:15:02.983693 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06fb1d82-f9e9-473b-80c5-767ec3948bd4-kube-api-access-fbbrj" (OuterVolumeSpecName: "kube-api-access-fbbrj") pod "06fb1d82-f9e9-473b-80c5-767ec3948bd4" (UID: "06fb1d82-f9e9-473b-80c5-767ec3948bd4"). InnerVolumeSpecName "kube-api-access-fbbrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:15:03.067402 master-0 kubenswrapper[7864]: I0224 02:15:03.067329 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbbrj\" (UniqueName: \"kubernetes.io/projected/06fb1d82-f9e9-473b-80c5-767ec3948bd4-kube-api-access-fbbrj\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:03.067402 master-0 kubenswrapper[7864]: I0224 02:15:03.067386 7864 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06fb1d82-f9e9-473b-80c5-767ec3948bd4-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:03.067402 master-0 kubenswrapper[7864]: I0224 02:15:03.067408 7864 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06fb1d82-f9e9-473b-80c5-767ec3948bd4-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:03.085793 master-0 kubenswrapper[7864]: I0224 02:15:03.085713 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:03.085793 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:03.085793 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:03.085793 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:03.086084 master-0 kubenswrapper[7864]: I0224 02:15:03.085803 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:03.468149 master-0 kubenswrapper[7864]: I0224 02:15:03.468074 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"] Feb 24 02:15:03.468493 master-0 kubenswrapper[7864]: E0224 02:15:03.468466 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" Feb 24 02:15:03.468493 master-0 kubenswrapper[7864]: I0224 02:15:03.468489 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" Feb 24 02:15:03.468678 master-0 kubenswrapper[7864]: E0224 02:15:03.468513 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06fb1d82-f9e9-473b-80c5-767ec3948bd4" containerName="collect-profiles" Feb 24 02:15:03.468678 master-0 kubenswrapper[7864]: I0224 02:15:03.468527 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="06fb1d82-f9e9-473b-80c5-767ec3948bd4" containerName="collect-profiles" Feb 24 02:15:03.468678 master-0 kubenswrapper[7864]: E0224 02:15:03.468542 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6fd0d2-d629-4399-b008-979f28390943" 
containerName="route-controller-manager" Feb 24 02:15:03.468678 master-0 kubenswrapper[7864]: I0224 02:15:03.468555 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6fd0d2-d629-4399-b008-979f28390943" containerName="route-controller-manager" Feb 24 02:15:03.468678 master-0 kubenswrapper[7864]: E0224 02:15:03.468601 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" Feb 24 02:15:03.468678 master-0 kubenswrapper[7864]: I0224 02:15:03.468613 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" Feb 24 02:15:03.469020 master-0 kubenswrapper[7864]: I0224 02:15:03.468911 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" Feb 24 02:15:03.469020 master-0 kubenswrapper[7864]: I0224 02:15:03.468950 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="06fb1d82-f9e9-473b-80c5-767ec3948bd4" containerName="collect-profiles" Feb 24 02:15:03.469020 master-0 kubenswrapper[7864]: I0224 02:15:03.468988 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6fd0d2-d629-4399-b008-979f28390943" containerName="route-controller-manager" Feb 24 02:15:03.469749 master-0 kubenswrapper[7864]: I0224 02:15:03.469710 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.481119 master-0 kubenswrapper[7864]: I0224 02:15:03.479088 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-ntn8v" Feb 24 02:15:03.481119 master-0 kubenswrapper[7864]: I0224 02:15:03.479996 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 02:15:03.481119 master-0 kubenswrapper[7864]: I0224 02:15:03.480425 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 02:15:03.481119 master-0 kubenswrapper[7864]: I0224 02:15:03.480601 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 02:15:03.481477 master-0 kubenswrapper[7864]: I0224 02:15:03.481406 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 02:15:03.483698 master-0 kubenswrapper[7864]: I0224 02:15:03.483642 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 02:15:03.498914 master-0 kubenswrapper[7864]: I0224 02:15:03.497352 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"] Feb 24 02:15:03.498914 master-0 kubenswrapper[7864]: I0224 02:15:03.498904 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" containerName="controller-manager" Feb 24 02:15:03.535899 master-0 kubenswrapper[7864]: I0224 02:15:03.535809 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.535899 master-0 kubenswrapper[7864]: I0224 02:15:03.535866 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 02:15:03.538452 master-0 kubenswrapper[7864]: I0224 02:15:03.538416 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 02:15:03.541061 master-0 kubenswrapper[7864]: I0224 02:15:03.540291 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 02:15:03.541061 master-0 kubenswrapper[7864]: I0224 02:15:03.540403 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ltf57" Feb 24 02:15:03.541061 master-0 kubenswrapper[7864]: I0224 02:15:03.540605 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 02:15:03.541061 master-0 kubenswrapper[7864]: I0224 02:15:03.541023 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 02:15:03.541366 master-0 kubenswrapper[7864]: I0224 02:15:03.541248 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 02:15:03.547831 master-0 kubenswrapper[7864]: I0224 02:15:03.546919 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"] Feb 24 02:15:03.555756 master-0 kubenswrapper[7864]: I0224 02:15:03.555717 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"] Feb 24 02:15:03.572188 master-0 kubenswrapper[7864]: I0224 
02:15:03.572103 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" event={"ID":"06fb1d82-f9e9-473b-80c5-767ec3948bd4","Type":"ContainerDied","Data":"d422768badc0a68fd4cf2f302f097d6619f8211838023083d72164e3cae439f7"} Feb 24 02:15:03.572188 master-0 kubenswrapper[7864]: I0224 02:15:03.572167 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d422768badc0a68fd4cf2f302f097d6619f8211838023083d72164e3cae439f7" Feb 24 02:15:03.572647 master-0 kubenswrapper[7864]: I0224 02:15:03.572235 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" Feb 24 02:15:03.595236 master-0 kubenswrapper[7864]: I0224 02:15:03.595151 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.595420 master-0 kubenswrapper[7864]: I0224 02:15:03.595368 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.595503 master-0 kubenswrapper[7864]: I0224 02:15:03.595476 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxjtb\" (UniqueName: \"kubernetes.io/projected/e8d6a6c0-b944-4206-9178-9a9930b303b9-kube-api-access-zxjtb\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: 
\"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.595782 master-0 kubenswrapper[7864]: I0224 02:15:03.595685 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.595860 master-0 kubenswrapper[7864]: I0224 02:15:03.595814 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.595924 master-0 kubenswrapper[7864]: I0224 02:15:03.595907 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.596034 master-0 kubenswrapper[7864]: I0224 02:15:03.595999 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.596109 master-0 kubenswrapper[7864]: I0224 
02:15:03.596058 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.596184 master-0 kubenswrapper[7864]: I0224 02:15:03.596121 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzghr\" (UniqueName: \"kubernetes.io/projected/55a2662a-d672-4a46-9b81-bfcaf334eedb-kube-api-access-gzghr\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.698364 master-0 kubenswrapper[7864]: I0224 02:15:03.698230 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.698525 master-0 kubenswrapper[7864]: I0224 02:15:03.698429 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxjtb\" (UniqueName: \"kubernetes.io/projected/e8d6a6c0-b944-4206-9178-9a9930b303b9-kube-api-access-zxjtb\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.698666 master-0 kubenswrapper[7864]: I0224 02:15:03.698629 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.698961 master-0 kubenswrapper[7864]: I0224 02:15:03.698881 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.699214 master-0 kubenswrapper[7864]: I0224 02:15:03.699165 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.699284 master-0 kubenswrapper[7864]: I0224 02:15:03.699250 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.699355 master-0 kubenswrapper[7864]: I0224 02:15:03.699300 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 
24 02:15:03.699427 master-0 kubenswrapper[7864]: I0224 02:15:03.699395 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzghr\" (UniqueName: \"kubernetes.io/projected/55a2662a-d672-4a46-9b81-bfcaf334eedb-kube-api-access-gzghr\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.700269 master-0 kubenswrapper[7864]: I0224 02:15:03.700192 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.701123 master-0 kubenswrapper[7864]: I0224 02:15:03.701052 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.701728 master-0 kubenswrapper[7864]: I0224 02:15:03.701668 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.701985 master-0 kubenswrapper[7864]: I0224 02:15:03.701931 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config\") pod 
\"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.702076 master-0 kubenswrapper[7864]: I0224 02:15:03.702000 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.704871 master-0 kubenswrapper[7864]: I0224 02:15:03.704760 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.706218 master-0 kubenswrapper[7864]: I0224 02:15:03.706161 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.707915 master-0 kubenswrapper[7864]: I0224 02:15:03.707862 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.728701 master-0 kubenswrapper[7864]: I0224 02:15:03.728532 7864 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zxjtb\" (UniqueName: \"kubernetes.io/projected/e8d6a6c0-b944-4206-9178-9a9930b303b9-kube-api-access-zxjtb\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.730334 master-0 kubenswrapper[7864]: I0224 02:15:03.730275 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzghr\" (UniqueName: \"kubernetes.io/projected/55a2662a-d672-4a46-9b81-bfcaf334eedb-kube-api-access-gzghr\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.870562 master-0 kubenswrapper[7864]: I0224 02:15:03.870488 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:03.897593 master-0 kubenswrapper[7864]: I0224 02:15:03.896919 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:03.904732 master-0 kubenswrapper[7864]: I0224 02:15:03.900980 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6fd0d2-d629-4399-b008-979f28390943" path="/var/lib/kubelet/pods/8e6fd0d2-d629-4399-b008-979f28390943/volumes" Feb 24 02:15:03.904732 master-0 kubenswrapper[7864]: I0224 02:15:03.902772 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1a99d5-e213-42b3-9538-44f68d993184" path="/var/lib/kubelet/pods/bd1a99d5-e213-42b3-9538-44f68d993184/volumes" Feb 24 02:15:04.088767 master-0 kubenswrapper[7864]: I0224 02:15:04.088670 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:04.088767 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:04.088767 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:04.088767 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:04.089138 master-0 kubenswrapper[7864]: I0224 02:15:04.088787 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:04.323881 master-0 kubenswrapper[7864]: I0224 02:15:04.323812 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"] Feb 24 02:15:04.331127 master-0 kubenswrapper[7864]: W0224 02:15:04.331040 7864 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a2662a_d672_4a46_9b81_bfcaf334eedb.slice/crio-45681e7db0a00432167c0ceb01dfa150d4182b397673d5a0da048e4b9054ffea WatchSource:0}: Error finding container 45681e7db0a00432167c0ceb01dfa150d4182b397673d5a0da048e4b9054ffea: Status 404 returned error can't find the container with id 45681e7db0a00432167c0ceb01dfa150d4182b397673d5a0da048e4b9054ffea Feb 24 02:15:04.449486 master-0 kubenswrapper[7864]: I0224 02:15:04.449416 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"] Feb 24 02:15:04.472496 master-0 kubenswrapper[7864]: W0224 02:15:04.472434 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d6a6c0_b944_4206_9178_9a9930b303b9.slice/crio-5c3398a6c263edc9332a777f898c18bf8d4d5354af4bc2396e80f920a1e77f07 WatchSource:0}: Error finding container 5c3398a6c263edc9332a777f898c18bf8d4d5354af4bc2396e80f920a1e77f07: Status 404 returned error can't find the container with id 5c3398a6c263edc9332a777f898c18bf8d4d5354af4bc2396e80f920a1e77f07 Feb 24 02:15:04.591116 master-0 kubenswrapper[7864]: I0224 02:15:04.591047 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" event={"ID":"e8d6a6c0-b944-4206-9178-9a9930b303b9","Type":"ContainerStarted","Data":"5c3398a6c263edc9332a777f898c18bf8d4d5354af4bc2396e80f920a1e77f07"} Feb 24 02:15:04.594508 master-0 kubenswrapper[7864]: I0224 02:15:04.594426 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" event={"ID":"55a2662a-d672-4a46-9b81-bfcaf334eedb","Type":"ContainerStarted","Data":"bcd82e1ea303c732e8cd1c96072c832298944aee61e13de8101c6575d136f541"} Feb 24 02:15:04.594649 master-0 kubenswrapper[7864]: I0224 02:15:04.594509 7864 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" event={"ID":"55a2662a-d672-4a46-9b81-bfcaf334eedb","Type":"ContainerStarted","Data":"45681e7db0a00432167c0ceb01dfa150d4182b397673d5a0da048e4b9054ffea"} Feb 24 02:15:04.595701 master-0 kubenswrapper[7864]: I0224 02:15:04.595650 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:04.599530 master-0 kubenswrapper[7864]: I0224 02:15:04.598347 7864 patch_prober.go:28] interesting pod/route-controller-manager-676fddcd58-49xzd container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.80:8443/healthz\": dial tcp 10.128.0.80:8443: connect: connection refused" start-of-body= Feb 24 02:15:04.599530 master-0 kubenswrapper[7864]: I0224 02:15:04.598437 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" podUID="55a2662a-d672-4a46-9b81-bfcaf334eedb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.80:8443/healthz\": dial tcp 10.128.0.80:8443: connect: connection refused" Feb 24 02:15:05.084863 master-0 kubenswrapper[7864]: I0224 02:15:05.084788 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:05.084863 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:05.084863 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:05.084863 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:05.085403 master-0 kubenswrapper[7864]: I0224 02:15:05.084869 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:05.605918 master-0 kubenswrapper[7864]: I0224 02:15:05.605802 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" event={"ID":"e8d6a6c0-b944-4206-9178-9a9930b303b9","Type":"ContainerStarted","Data":"5ed088abb8fdf119602dca1779c3b84da28af95aaab8dcf8c7df738c7d83aa56"} Feb 24 02:15:05.614199 master-0 kubenswrapper[7864]: I0224 02:15:05.614145 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:15:05.631506 master-0 kubenswrapper[7864]: I0224 02:15:05.631421 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" podStartSLOduration=4.631400296 podStartE2EDuration="4.631400296s" podCreationTimestamp="2026-02-24 02:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:15:04.631815102 +0000 UTC m=+668.959468764" watchObservedRunningTime="2026-02-24 02:15:05.631400296 +0000 UTC m=+669.959053938" Feb 24 02:15:05.634152 master-0 kubenswrapper[7864]: I0224 02:15:05.634087 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" podStartSLOduration=4.634075321 podStartE2EDuration="4.634075321s" podCreationTimestamp="2026-02-24 02:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:15:05.629655457 +0000 UTC m=+669.957309109" watchObservedRunningTime="2026-02-24 02:15:05.634075321 +0000 UTC 
m=+669.961728963" Feb 24 02:15:06.084531 master-0 kubenswrapper[7864]: I0224 02:15:06.084442 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:06.084531 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:06.084531 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:06.084531 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:06.085002 master-0 kubenswrapper[7864]: I0224 02:15:06.084600 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:06.616556 master-0 kubenswrapper[7864]: I0224 02:15:06.616490 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:06.629415 master-0 kubenswrapper[7864]: I0224 02:15:06.629324 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:15:07.085309 master-0 kubenswrapper[7864]: I0224 02:15:07.085197 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:07.085309 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:07.085309 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:07.085309 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:07.085309 master-0 kubenswrapper[7864]: I0224 02:15:07.085300 7864 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:08.086190 master-0 kubenswrapper[7864]: I0224 02:15:08.086073 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:08.086190 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:08.086190 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:08.086190 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:08.087352 master-0 kubenswrapper[7864]: I0224 02:15:08.086208 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:08.088652 master-0 kubenswrapper[7864]: I0224 02:15:08.088523 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 24 02:15:08.090030 master-0 kubenswrapper[7864]: I0224 02:15:08.089977 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.092769 master-0 kubenswrapper[7864]: I0224 02:15:08.092719 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-k86pk" Feb 24 02:15:08.094168 master-0 kubenswrapper[7864]: I0224 02:15:08.094113 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 02:15:08.109409 master-0 kubenswrapper[7864]: I0224 02:15:08.109325 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 24 02:15:08.206618 master-0 kubenswrapper[7864]: I0224 02:15:08.206498 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.206995 master-0 kubenswrapper[7864]: I0224 02:15:08.206670 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.206995 master-0 kubenswrapper[7864]: I0224 02:15:08.206714 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-var-lock\") pod \"installer-2-master-0\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.307972 master-0 
kubenswrapper[7864]: I0224 02:15:08.307898 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-var-lock\") pod \"installer-2-master-0\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.308333 master-0 kubenswrapper[7864]: I0224 02:15:08.308068 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-var-lock\") pod \"installer-2-master-0\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.308333 master-0 kubenswrapper[7864]: I0224 02:15:08.308135 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.308479 master-0 kubenswrapper[7864]: I0224 02:15:08.308349 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.308479 master-0 kubenswrapper[7864]: I0224 02:15:08.308398 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.337179 master-0 
kubenswrapper[7864]: I0224 02:15:08.337047 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.457397 master-0 kubenswrapper[7864]: I0224 02:15:08.457300 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:08.988492 master-0 kubenswrapper[7864]: I0224 02:15:08.988413 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 24 02:15:09.092657 master-0 kubenswrapper[7864]: I0224 02:15:09.092599 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:09.092657 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:09.092657 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:09.092657 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:09.093730 master-0 kubenswrapper[7864]: I0224 02:15:09.093681 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:09.647293 master-0 kubenswrapper[7864]: I0224 02:15:09.647199 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" 
event={"ID":"c12652f5-003f-4b77-b2bb-b666c9d7bb53","Type":"ContainerStarted","Data":"50801a56a4404416a44874540419cd05a4a4bedf1fb5022f9e0b4725f3c11f4d"} Feb 24 02:15:09.647293 master-0 kubenswrapper[7864]: I0224 02:15:09.647285 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"c12652f5-003f-4b77-b2bb-b666c9d7bb53","Type":"ContainerStarted","Data":"d91c1d25f97f5902e0cd98da21fb3d84dc557631fbc1bb6bed501fad908da85d"} Feb 24 02:15:09.673917 master-0 kubenswrapper[7864]: I0224 02:15:09.673805 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=1.673781328 podStartE2EDuration="1.673781328s" podCreationTimestamp="2026-02-24 02:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:15:09.670403463 +0000 UTC m=+673.998057125" watchObservedRunningTime="2026-02-24 02:15:09.673781328 +0000 UTC m=+674.001434990" Feb 24 02:15:10.085695 master-0 kubenswrapper[7864]: I0224 02:15:10.085559 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:10.085695 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:10.085695 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:10.085695 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:10.086195 master-0 kubenswrapper[7864]: I0224 02:15:10.085724 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 
02:15:10.253504 master-0 kubenswrapper[7864]: E0224 02:15:10.253162 7864 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 02:15:10.255749 master-0 kubenswrapper[7864]: E0224 02:15:10.255668 7864 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 02:15:10.258136 master-0 kubenswrapper[7864]: E0224 02:15:10.258067 7864 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 02:15:10.258276 master-0 kubenswrapper[7864]: E0224 02:15:10.258166 7864 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" podUID="36483bf4-9e27-4c15-bd83-bde809a64b5c" containerName="kube-multus-additional-cni-plugins" Feb 24 02:15:11.086165 master-0 kubenswrapper[7864]: I0224 02:15:11.086053 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:11.086165 master-0 kubenswrapper[7864]: 
[-]has-synced failed: reason withheld Feb 24 02:15:11.086165 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:11.086165 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:11.086711 master-0 kubenswrapper[7864]: I0224 02:15:11.086176 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:12.084972 master-0 kubenswrapper[7864]: I0224 02:15:12.084883 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:12.084972 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:12.084972 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:12.084972 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:12.086225 master-0 kubenswrapper[7864]: I0224 02:15:12.085013 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:13.085201 master-0 kubenswrapper[7864]: I0224 02:15:13.085082 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:13.085201 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:13.085201 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:13.085201 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:13.086482 master-0 
kubenswrapper[7864]: I0224 02:15:13.085213 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:13.362244 master-0 kubenswrapper[7864]: I0224 02:15:13.362161 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-75qmm_36483bf4-9e27-4c15-bd83-bde809a64b5c/kube-multus-additional-cni-plugins/0.log" Feb 24 02:15:13.362558 master-0 kubenswrapper[7864]: I0224 02:15:13.362286 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:15:13.402122 master-0 kubenswrapper[7864]: I0224 02:15:13.401989 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/36483bf4-9e27-4c15-bd83-bde809a64b5c-cni-sysctl-allowlist\") pod \"36483bf4-9e27-4c15-bd83-bde809a64b5c\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " Feb 24 02:15:13.402418 master-0 kubenswrapper[7864]: I0224 02:15:13.402303 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/36483bf4-9e27-4c15-bd83-bde809a64b5c-tuning-conf-dir\") pod \"36483bf4-9e27-4c15-bd83-bde809a64b5c\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " Feb 24 02:15:13.402505 master-0 kubenswrapper[7864]: I0224 02:15:13.402411 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36483bf4-9e27-4c15-bd83-bde809a64b5c-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "36483bf4-9e27-4c15-bd83-bde809a64b5c" (UID: "36483bf4-9e27-4c15-bd83-bde809a64b5c"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:13.402505 master-0 kubenswrapper[7864]: I0224 02:15:13.402416 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz8z8\" (UniqueName: \"kubernetes.io/projected/36483bf4-9e27-4c15-bd83-bde809a64b5c-kube-api-access-qz8z8\") pod \"36483bf4-9e27-4c15-bd83-bde809a64b5c\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " Feb 24 02:15:13.402670 master-0 kubenswrapper[7864]: I0224 02:15:13.402563 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/36483bf4-9e27-4c15-bd83-bde809a64b5c-ready\") pod \"36483bf4-9e27-4c15-bd83-bde809a64b5c\" (UID: \"36483bf4-9e27-4c15-bd83-bde809a64b5c\") " Feb 24 02:15:13.402962 master-0 kubenswrapper[7864]: I0224 02:15:13.402853 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36483bf4-9e27-4c15-bd83-bde809a64b5c-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "36483bf4-9e27-4c15-bd83-bde809a64b5c" (UID: "36483bf4-9e27-4c15-bd83-bde809a64b5c"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:15:13.403183 master-0 kubenswrapper[7864]: I0224 02:15:13.403134 7864 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/36483bf4-9e27-4c15-bd83-bde809a64b5c-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:13.403275 master-0 kubenswrapper[7864]: I0224 02:15:13.403202 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36483bf4-9e27-4c15-bd83-bde809a64b5c-ready" (OuterVolumeSpecName: "ready") pod "36483bf4-9e27-4c15-bd83-bde809a64b5c" (UID: "36483bf4-9e27-4c15-bd83-bde809a64b5c"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:15:13.407749 master-0 kubenswrapper[7864]: I0224 02:15:13.407700 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36483bf4-9e27-4c15-bd83-bde809a64b5c-kube-api-access-qz8z8" (OuterVolumeSpecName: "kube-api-access-qz8z8") pod "36483bf4-9e27-4c15-bd83-bde809a64b5c" (UID: "36483bf4-9e27-4c15-bd83-bde809a64b5c"). InnerVolumeSpecName "kube-api-access-qz8z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:15:13.504843 master-0 kubenswrapper[7864]: I0224 02:15:13.504765 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz8z8\" (UniqueName: \"kubernetes.io/projected/36483bf4-9e27-4c15-bd83-bde809a64b5c-kube-api-access-qz8z8\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:13.504843 master-0 kubenswrapper[7864]: I0224 02:15:13.504828 7864 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/36483bf4-9e27-4c15-bd83-bde809a64b5c-ready\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:13.504843 master-0 kubenswrapper[7864]: I0224 02:15:13.504850 7864 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/36483bf4-9e27-4c15-bd83-bde809a64b5c-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:13.688122 master-0 kubenswrapper[7864]: I0224 02:15:13.687963 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-75qmm_36483bf4-9e27-4c15-bd83-bde809a64b5c/kube-multus-additional-cni-plugins/0.log" Feb 24 02:15:13.688122 master-0 kubenswrapper[7864]: I0224 02:15:13.688061 7864 generic.go:334] "Generic (PLEG): container finished" podID="36483bf4-9e27-4c15-bd83-bde809a64b5c" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" exitCode=137 Feb 24 02:15:13.688122 master-0 kubenswrapper[7864]: I0224 
02:15:13.688115 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" event={"ID":"36483bf4-9e27-4c15-bd83-bde809a64b5c","Type":"ContainerDied","Data":"4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9"} Feb 24 02:15:13.688520 master-0 kubenswrapper[7864]: I0224 02:15:13.688180 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" event={"ID":"36483bf4-9e27-4c15-bd83-bde809a64b5c","Type":"ContainerDied","Data":"21c99d51c26516b820359d2a4b1fd0df121190c0e505cd23ba2c2f47d16cd7f9"} Feb 24 02:15:13.688520 master-0 kubenswrapper[7864]: I0224 02:15:13.688180 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-75qmm" Feb 24 02:15:13.688520 master-0 kubenswrapper[7864]: I0224 02:15:13.688215 7864 scope.go:117] "RemoveContainer" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" Feb 24 02:15:13.715534 master-0 kubenswrapper[7864]: I0224 02:15:13.715480 7864 scope.go:117] "RemoveContainer" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" Feb 24 02:15:13.716366 master-0 kubenswrapper[7864]: E0224 02:15:13.716281 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9\": container with ID starting with 4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9 not found: ID does not exist" containerID="4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9" Feb 24 02:15:13.716453 master-0 kubenswrapper[7864]: I0224 02:15:13.716380 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9"} err="failed to get container status 
\"4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9\": rpc error: code = NotFound desc = could not find container \"4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9\": container with ID starting with 4fc3c3f9b07ab4f3d8df72a12e6a49ea26ea25715d1967b9b5fbf8ea69312af9 not found: ID does not exist" Feb 24 02:15:13.747753 master-0 kubenswrapper[7864]: I0224 02:15:13.747679 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-75qmm"] Feb 24 02:15:13.756459 master-0 kubenswrapper[7864]: I0224 02:15:13.756391 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-75qmm"] Feb 24 02:15:13.892048 master-0 kubenswrapper[7864]: I0224 02:15:13.891958 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36483bf4-9e27-4c15-bd83-bde809a64b5c" path="/var/lib/kubelet/pods/36483bf4-9e27-4c15-bd83-bde809a64b5c/volumes" Feb 24 02:15:14.084953 master-0 kubenswrapper[7864]: I0224 02:15:14.084863 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:14.084953 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:14.084953 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:14.084953 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:14.086000 master-0 kubenswrapper[7864]: I0224 02:15:14.084971 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:15.085204 master-0 kubenswrapper[7864]: I0224 02:15:15.085089 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:15.085204 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:15.085204 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:15.085204 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:15.086368 master-0 kubenswrapper[7864]: I0224 02:15:15.085245 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:16.086054 master-0 kubenswrapper[7864]: I0224 02:15:16.085946 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:16.086054 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:16.086054 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:16.086054 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:16.086054 master-0 kubenswrapper[7864]: I0224 02:15:16.086067 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:16.528720 master-0 kubenswrapper[7864]: I0224 02:15:16.528630 7864 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 24 02:15:16.529330 master-0 kubenswrapper[7864]: I0224 02:15:16.529260 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" 
podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" containerID="cri-o://d67efce29a4c7dadf9def673a29b605a940ecf24b2d87b4ec084d429002c032e" gracePeriod=30 Feb 24 02:15:16.529412 master-0 kubenswrapper[7864]: I0224 02:15:16.529352 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" containerID="cri-o://7418acef2878f63a41664398dc64c50d69563b99c0e4935df8104aecdaf495b4" gracePeriod=30 Feb 24 02:15:16.529491 master-0 kubenswrapper[7864]: I0224 02:15:16.529375 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" containerID="cri-o://0899242d9942257db778aa29a478801ba8d2518e639b0033c7b16a0a42ff10a5" gracePeriod=30 Feb 24 02:15:16.529491 master-0 kubenswrapper[7864]: I0224 02:15:16.529442 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" containerID="cri-o://8e958425dd3f7d3725b8e44d186204361d688e9476e552207418132e8cb6897d" gracePeriod=30 Feb 24 02:15:16.529734 master-0 kubenswrapper[7864]: I0224 02:15:16.529388 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" containerID="cri-o://e94d6492402ac33f13355d596dfc90617ce3a06153f369c7597c91d9aa0d6092" gracePeriod=30 Feb 24 02:15:16.534113 master-0 kubenswrapper[7864]: I0224 02:15:16.534045 7864 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 24 02:15:16.534667 master-0 kubenswrapper[7864]: E0224 02:15:16.534616 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36483bf4-9e27-4c15-bd83-bde809a64b5c" containerName="kube-multus-additional-cni-plugins" Feb 24 
02:15:16.534667 master-0 kubenswrapper[7864]: I0224 02:15:16.534652 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="36483bf4-9e27-4c15-bd83-bde809a64b5c" containerName="kube-multus-additional-cni-plugins" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: E0224 02:15:16.534685 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: I0224 02:15:16.534699 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: E0224 02:15:16.534720 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: I0224 02:15:16.534733 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: E0224 02:15:16.534754 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: I0224 02:15:16.534766 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: E0224 02:15:16.534786 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: I0224 02:15:16.534798 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: E0224 02:15:16.534816 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" 
containerName="etcd-ensure-env-vars" Feb 24 02:15:16.534811 master-0 kubenswrapper[7864]: I0224 02:15:16.534830 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: E0224 02:15:16.534846 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: I0224 02:15:16.534859 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: E0224 02:15:16.534880 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: I0224 02:15:16.534892 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: E0224 02:15:16.534916 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: I0224 02:15:16.534928 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: I0224 02:15:16.535128 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: I0224 02:15:16.535147 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: I0224 02:15:16.535168 7864 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: I0224 02:15:16.535181 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: I0224 02:15:16.535212 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" Feb 24 02:15:16.535310 master-0 kubenswrapper[7864]: I0224 02:15:16.535234 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="36483bf4-9e27-4c15-bd83-bde809a64b5c" containerName="kube-multus-additional-cni-plugins" Feb 24 02:15:16.659646 master-0 kubenswrapper[7864]: I0224 02:15:16.659470 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.659646 master-0 kubenswrapper[7864]: I0224 02:15:16.659599 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.659978 master-0 kubenswrapper[7864]: I0224 02:15:16.659705 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.659978 master-0 kubenswrapper[7864]: I0224 02:15:16.659765 7864 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.659978 master-0 kubenswrapper[7864]: I0224 02:15:16.659806 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.659978 master-0 kubenswrapper[7864]: I0224 02:15:16.659847 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.723910 master-0 kubenswrapper[7864]: I0224 02:15:16.723823 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 24 02:15:16.726999 master-0 kubenswrapper[7864]: I0224 02:15:16.726945 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 24 02:15:16.730403 master-0 kubenswrapper[7864]: I0224 02:15:16.730228 7864 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="8e958425dd3f7d3725b8e44d186204361d688e9476e552207418132e8cb6897d" exitCode=2 Feb 24 02:15:16.730403 master-0 kubenswrapper[7864]: I0224 02:15:16.730289 7864 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="7418acef2878f63a41664398dc64c50d69563b99c0e4935df8104aecdaf495b4" 
exitCode=0 Feb 24 02:15:16.730403 master-0 kubenswrapper[7864]: I0224 02:15:16.730303 7864 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="0899242d9942257db778aa29a478801ba8d2518e639b0033c7b16a0a42ff10a5" exitCode=2 Feb 24 02:15:16.761951 master-0 kubenswrapper[7864]: I0224 02:15:16.761872 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762104 master-0 kubenswrapper[7864]: I0224 02:15:16.762034 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762104 master-0 kubenswrapper[7864]: I0224 02:15:16.762043 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762242 master-0 kubenswrapper[7864]: I0224 02:15:16.762207 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762242 master-0 kubenswrapper[7864]: I0224 02:15:16.762211 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: 
\"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762369 master-0 kubenswrapper[7864]: I0224 02:15:16.762281 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762369 master-0 kubenswrapper[7864]: I0224 02:15:16.762330 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762369 master-0 kubenswrapper[7864]: I0224 02:15:16.762347 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762539 master-0 kubenswrapper[7864]: I0224 02:15:16.762285 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762539 master-0 kubenswrapper[7864]: I0224 02:15:16.762377 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762539 master-0 kubenswrapper[7864]: I0224 02:15:16.762415 7864 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:16.762539 master-0 kubenswrapper[7864]: I0224 02:15:16.762472 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:15:17.084757 master-0 kubenswrapper[7864]: I0224 02:15:17.084672 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:17.084757 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:17.084757 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:17.084757 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:17.085179 master-0 kubenswrapper[7864]: I0224 02:15:17.084774 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:18.085402 master-0 kubenswrapper[7864]: I0224 02:15:18.085292 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:18.085402 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:18.085402 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:18.085402 
master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:18.085402 master-0 kubenswrapper[7864]: I0224 02:15:18.085416 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:19.085271 master-0 kubenswrapper[7864]: I0224 02:15:19.085160 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:19.085271 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:19.085271 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:19.085271 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:19.086737 master-0 kubenswrapper[7864]: I0224 02:15:19.085316 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:20.085522 master-0 kubenswrapper[7864]: I0224 02:15:20.085289 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:20.085522 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:20.085522 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:20.085522 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:20.085522 master-0 kubenswrapper[7864]: I0224 02:15:20.085415 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:21.084853 master-0 kubenswrapper[7864]: I0224 02:15:21.084770 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:21.084853 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:21.084853 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:21.084853 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:21.085409 master-0 kubenswrapper[7864]: I0224 02:15:21.084886 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:21.785682 master-0 kubenswrapper[7864]: I0224 02:15:21.785435 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-dg77f_dc3d08db-45fa-4fef-b1fd-2875f22d5c45/multus-admission-controller/0.log" Feb 24 02:15:21.785682 master-0 kubenswrapper[7864]: I0224 02:15:21.785538 7864 generic.go:334] "Generic (PLEG): container finished" podID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" containerID="1a9c80348ec3d9615f2f58e4f90b6f801e400fc962ca77ec229b6df397014b2d" exitCode=137 Feb 24 02:15:21.785682 master-0 kubenswrapper[7864]: I0224 02:15:21.785622 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" event={"ID":"dc3d08db-45fa-4fef-b1fd-2875f22d5c45","Type":"ContainerDied","Data":"1a9c80348ec3d9615f2f58e4f90b6f801e400fc962ca77ec229b6df397014b2d"} Feb 24 02:15:22.084317 
master-0 kubenswrapper[7864]: I0224 02:15:22.084175 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:22.084317 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:22.084317 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:22.084317 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:22.084317 master-0 kubenswrapper[7864]: I0224 02:15:22.084265 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:22.415650 master-0 kubenswrapper[7864]: I0224 02:15:22.415549 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-dg77f_dc3d08db-45fa-4fef-b1fd-2875f22d5c45/multus-admission-controller/0.log" Feb 24 02:15:22.415987 master-0 kubenswrapper[7864]: I0224 02:15:22.415826 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:15:22.548973 master-0 kubenswrapper[7864]: I0224 02:15:22.548876 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") pod \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " Feb 24 02:15:22.549289 master-0 kubenswrapper[7864]: I0224 02:15:22.549065 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ssxg\" (UniqueName: \"kubernetes.io/projected/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-kube-api-access-2ssxg\") pod \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\" (UID: \"dc3d08db-45fa-4fef-b1fd-2875f22d5c45\") " Feb 24 02:15:22.554903 master-0 kubenswrapper[7864]: I0224 02:15:22.554827 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "dc3d08db-45fa-4fef-b1fd-2875f22d5c45" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:15:22.556298 master-0 kubenswrapper[7864]: I0224 02:15:22.556210 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-kube-api-access-2ssxg" (OuterVolumeSpecName: "kube-api-access-2ssxg") pod "dc3d08db-45fa-4fef-b1fd-2875f22d5c45" (UID: "dc3d08db-45fa-4fef-b1fd-2875f22d5c45"). InnerVolumeSpecName "kube-api-access-2ssxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:15:22.651453 master-0 kubenswrapper[7864]: I0224 02:15:22.651301 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ssxg\" (UniqueName: \"kubernetes.io/projected/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-kube-api-access-2ssxg\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:22.651453 master-0 kubenswrapper[7864]: I0224 02:15:22.651337 7864 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dc3d08db-45fa-4fef-b1fd-2875f22d5c45-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:22.798147 master-0 kubenswrapper[7864]: I0224 02:15:22.798082 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-dg77f_dc3d08db-45fa-4fef-b1fd-2875f22d5c45/multus-admission-controller/0.log" Feb 24 02:15:22.798908 master-0 kubenswrapper[7864]: I0224 02:15:22.798193 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" event={"ID":"dc3d08db-45fa-4fef-b1fd-2875f22d5c45","Type":"ContainerDied","Data":"fcf31e223d10d13349fb4cdd6b66ae2d5ca057b142afb57903be6e940e13cdfc"} Feb 24 02:15:22.798908 master-0 kubenswrapper[7864]: I0224 02:15:22.798279 7864 scope.go:117] "RemoveContainer" containerID="206e32f211480b70d154888e7eaab059acddf0419748ec8afd5db9a5bab1c507" Feb 24 02:15:22.798908 master-0 kubenswrapper[7864]: I0224 02:15:22.798292 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" Feb 24 02:15:22.823266 master-0 kubenswrapper[7864]: I0224 02:15:22.823199 7864 scope.go:117] "RemoveContainer" containerID="1a9c80348ec3d9615f2f58e4f90b6f801e400fc962ca77ec229b6df397014b2d" Feb 24 02:15:23.085965 master-0 kubenswrapper[7864]: I0224 02:15:23.085857 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:23.085965 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:23.085965 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:23.085965 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:23.086495 master-0 kubenswrapper[7864]: I0224 02:15:23.086009 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:24.084591 master-0 kubenswrapper[7864]: I0224 02:15:24.084456 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:15:24.084591 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:15:24.084591 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:15:24.084591 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:15:24.085615 master-0 kubenswrapper[7864]: I0224 02:15:24.084642 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:15:24.085615 master-0 kubenswrapper[7864]: I0224 02:15:24.084734 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:15:24.085877 master-0 kubenswrapper[7864]: I0224 02:15:24.085829 7864 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"2b3213a5477253a9e7e61477d63287c522e2dc114274fd4d6193d811f2552967"} pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" containerMessage="Container router failed startup probe, will be restarted" Feb 24 02:15:24.085962 master-0 kubenswrapper[7864]: I0224 02:15:24.085926 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" containerID="cri-o://2b3213a5477253a9e7e61477d63287c522e2dc114274fd4d6193d811f2552967" gracePeriod=3600 Feb 24 02:15:29.870567 master-0 kubenswrapper[7864]: I0224 02:15:29.870450 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1" exitCode=1 Feb 24 02:15:29.870567 master-0 kubenswrapper[7864]: I0224 02:15:29.870534 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1"} Feb 24 02:15:29.871820 master-0 kubenswrapper[7864]: I0224 02:15:29.870634 7864 scope.go:117] "RemoveContainer" containerID="28d78d14185433406f5d6be1256f4efc7cd117cd145b616d5b8ccdbc8f03929c" Feb 24 02:15:29.871820 master-0 kubenswrapper[7864]: I0224 02:15:29.871432 7864 scope.go:117] "RemoveContainer" 
containerID="2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1" Feb 24 02:15:29.872006 master-0 kubenswrapper[7864]: E0224 02:15:29.871939 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:15:30.213643 master-0 kubenswrapper[7864]: I0224 02:15:30.213549 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:15:30.365198 master-0 kubenswrapper[7864]: E0224 02:15:30.365099 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:15:30.886059 master-0 kubenswrapper[7864]: I0224 02:15:30.886007 7864 scope.go:117] "RemoveContainer" containerID="2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1" Feb 24 02:15:30.886855 master-0 kubenswrapper[7864]: E0224 02:15:30.886393 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:15:30.889474 master-0 kubenswrapper[7864]: I0224 02:15:30.889409 7864 generic.go:334] "Generic (PLEG): container finished" podID="50c78047-1c4d-4535-ba2c-31f080d6a57d" 
containerID="7bb232625f3494579f18ed676cbbdfe8d63a7f633ead8439889c5a5bfa8b5a12" exitCode=0 Feb 24 02:15:30.889636 master-0 kubenswrapper[7864]: I0224 02:15:30.889480 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"50c78047-1c4d-4535-ba2c-31f080d6a57d","Type":"ContainerDied","Data":"7bb232625f3494579f18ed676cbbdfe8d63a7f633ead8439889c5a5bfa8b5a12"} Feb 24 02:15:32.278043 master-0 kubenswrapper[7864]: I0224 02:15:32.277969 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 24 02:15:32.405185 master-0 kubenswrapper[7864]: I0224 02:15:32.405113 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50c78047-1c4d-4535-ba2c-31f080d6a57d-kube-api-access\") pod \"50c78047-1c4d-4535-ba2c-31f080d6a57d\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " Feb 24 02:15:32.405364 master-0 kubenswrapper[7864]: I0224 02:15:32.405214 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-kubelet-dir\") pod \"50c78047-1c4d-4535-ba2c-31f080d6a57d\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " Feb 24 02:15:32.405364 master-0 kubenswrapper[7864]: I0224 02:15:32.405253 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-var-lock\") pod \"50c78047-1c4d-4535-ba2c-31f080d6a57d\" (UID: \"50c78047-1c4d-4535-ba2c-31f080d6a57d\") " Feb 24 02:15:32.405525 master-0 kubenswrapper[7864]: I0224 02:15:32.405398 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50c78047-1c4d-4535-ba2c-31f080d6a57d" 
(UID: "50c78047-1c4d-4535-ba2c-31f080d6a57d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:32.405619 master-0 kubenswrapper[7864]: I0224 02:15:32.405530 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-var-lock" (OuterVolumeSpecName: "var-lock") pod "50c78047-1c4d-4535-ba2c-31f080d6a57d" (UID: "50c78047-1c4d-4535-ba2c-31f080d6a57d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:32.406378 master-0 kubenswrapper[7864]: I0224 02:15:32.406285 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:32.406378 master-0 kubenswrapper[7864]: I0224 02:15:32.406315 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50c78047-1c4d-4535-ba2c-31f080d6a57d-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:32.410562 master-0 kubenswrapper[7864]: I0224 02:15:32.410449 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c78047-1c4d-4535-ba2c-31f080d6a57d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "50c78047-1c4d-4535-ba2c-31f080d6a57d" (UID: "50c78047-1c4d-4535-ba2c-31f080d6a57d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:15:32.508302 master-0 kubenswrapper[7864]: I0224 02:15:32.508227 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50c78047-1c4d-4535-ba2c-31f080d6a57d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:32.912609 master-0 kubenswrapper[7864]: I0224 02:15:32.912510 7864 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="5fbafce85063f872b1786e48e809b15f5aa08369e9d34c7d53d1c636ed17075e" exitCode=1 Feb 24 02:15:32.912946 master-0 kubenswrapper[7864]: I0224 02:15:32.912637 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"5fbafce85063f872b1786e48e809b15f5aa08369e9d34c7d53d1c636ed17075e"} Feb 24 02:15:32.912946 master-0 kubenswrapper[7864]: I0224 02:15:32.912754 7864 scope.go:117] "RemoveContainer" containerID="0be92c811440a62e120f914b735c96a60861f339574fbe9008068727fac04419" Feb 24 02:15:32.913691 master-0 kubenswrapper[7864]: I0224 02:15:32.913639 7864 scope.go:117] "RemoveContainer" containerID="5fbafce85063f872b1786e48e809b15f5aa08369e9d34c7d53d1c636ed17075e" Feb 24 02:15:32.915910 master-0 kubenswrapper[7864]: I0224 02:15:32.915514 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"50c78047-1c4d-4535-ba2c-31f080d6a57d","Type":"ContainerDied","Data":"073c8b4053396ee6cbbc1314c9a8361c6af6be047fe705c3463b911834d8b963"} Feb 24 02:15:32.916554 master-0 kubenswrapper[7864]: I0224 02:15:32.916458 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="073c8b4053396ee6cbbc1314c9a8361c6af6be047fe705c3463b911834d8b963" Feb 24 02:15:32.916669 master-0 kubenswrapper[7864]: I0224 02:15:32.916615 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 24 02:15:33.928926 master-0 kubenswrapper[7864]: I0224 02:15:33.928829 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"f26d70a857d43fac3deacb2102ae3da953979c9be93877036525bd880271cb08"} Feb 24 02:15:34.484435 master-0 kubenswrapper[7864]: I0224 02:15:34.484324 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:15:34.485233 master-0 kubenswrapper[7864]: I0224 02:15:34.485179 7864 scope.go:117] "RemoveContainer" containerID="2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1" Feb 24 02:15:34.485741 master-0 kubenswrapper[7864]: E0224 02:15:34.485679 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:15:37.217065 master-0 kubenswrapper[7864]: I0224 02:15:37.216961 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:15:37.218262 master-0 kubenswrapper[7864]: I0224 02:15:37.217907 7864 scope.go:117] "RemoveContainer" containerID="2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1" Feb 24 02:15:37.218409 master-0 kubenswrapper[7864]: E0224 02:15:37.218339 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:15:40.366246 master-0 kubenswrapper[7864]: E0224 02:15:40.366175 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:15:47.049065 master-0 kubenswrapper[7864]: I0224 02:15:47.048976 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 24 02:15:47.050668 master-0 kubenswrapper[7864]: I0224 02:15:47.050535 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 24 02:15:47.052008 master-0 kubenswrapper[7864]: I0224 02:15:47.051955 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 24 02:15:47.053863 master-0 kubenswrapper[7864]: I0224 02:15:47.053802 7864 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="e94d6492402ac33f13355d596dfc90617ce3a06153f369c7597c91d9aa0d6092" exitCode=0 Feb 24 02:15:47.053863 master-0 kubenswrapper[7864]: I0224 02:15:47.053847 7864 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="d67efce29a4c7dadf9def673a29b605a940ecf24b2d87b4ec084d429002c032e" exitCode=137 Feb 24 02:15:47.154858 master-0 kubenswrapper[7864]: I0224 02:15:47.154799 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 24 02:15:47.156419 master-0 kubenswrapper[7864]: I0224 
02:15:47.156368 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 24 02:15:47.157741 master-0 kubenswrapper[7864]: I0224 02:15:47.157705 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 24 02:15:47.159552 master-0 kubenswrapper[7864]: I0224 02:15:47.159508 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 24 02:15:47.347414 master-0 kubenswrapper[7864]: I0224 02:15:47.347211 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 24 02:15:47.347414 master-0 kubenswrapper[7864]: I0224 02:15:47.347340 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 24 02:15:47.347414 master-0 kubenswrapper[7864]: I0224 02:15:47.347389 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 24 02:15:47.347843 master-0 kubenswrapper[7864]: I0224 02:15:47.347444 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 24 02:15:47.347843 master-0 kubenswrapper[7864]: 
I0224 02:15:47.347542 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 24 02:15:47.347843 master-0 kubenswrapper[7864]: I0224 02:15:47.347428 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:47.347843 master-0 kubenswrapper[7864]: I0224 02:15:47.347624 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:47.347843 master-0 kubenswrapper[7864]: I0224 02:15:47.347639 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 24 02:15:47.347843 master-0 kubenswrapper[7864]: I0224 02:15:47.347443 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:47.347843 master-0 kubenswrapper[7864]: I0224 02:15:47.347482 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir" (OuterVolumeSpecName: "data-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:47.348275 master-0 kubenswrapper[7864]: I0224 02:15:47.347549 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir" (OuterVolumeSpecName: "log-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:47.348275 master-0 kubenswrapper[7864]: I0224 02:15:47.347701 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "static-pod-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:47.348617 master-0 kubenswrapper[7864]: I0224 02:15:47.348534 7864 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:47.348617 master-0 kubenswrapper[7864]: I0224 02:15:47.348608 7864 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:47.348783 master-0 kubenswrapper[7864]: I0224 02:15:47.348633 7864 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:47.348783 master-0 kubenswrapper[7864]: I0224 02:15:47.348652 7864 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:47.348783 master-0 kubenswrapper[7864]: I0224 02:15:47.348670 7864 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:47.348783 master-0 kubenswrapper[7864]: I0224 02:15:47.348690 7864 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:47.901286 master-0 kubenswrapper[7864]: I0224 02:15:47.901183 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a83278819db2092fa26d8274eb3f00" path="/var/lib/kubelet/pods/18a83278819db2092fa26d8274eb3f00/volumes" Feb 24 02:15:48.067277 master-0 
kubenswrapper[7864]: I0224 02:15:48.067225 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 24 02:15:48.069594 master-0 kubenswrapper[7864]: I0224 02:15:48.069502 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 24 02:15:48.071313 master-0 kubenswrapper[7864]: I0224 02:15:48.071226 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 24 02:15:48.073074 master-0 kubenswrapper[7864]: I0224 02:15:48.073012 7864 scope.go:117] "RemoveContainer" containerID="8e958425dd3f7d3725b8e44d186204361d688e9476e552207418132e8cb6897d" Feb 24 02:15:48.073246 master-0 kubenswrapper[7864]: I0224 02:15:48.073150 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 24 02:15:48.099477 master-0 kubenswrapper[7864]: I0224 02:15:48.099398 7864 scope.go:117] "RemoveContainer" containerID="7418acef2878f63a41664398dc64c50d69563b99c0e4935df8104aecdaf495b4" Feb 24 02:15:48.126112 master-0 kubenswrapper[7864]: I0224 02:15:48.126066 7864 scope.go:117] "RemoveContainer" containerID="0899242d9942257db778aa29a478801ba8d2518e639b0033c7b16a0a42ff10a5" Feb 24 02:15:48.155807 master-0 kubenswrapper[7864]: I0224 02:15:48.155724 7864 scope.go:117] "RemoveContainer" containerID="e94d6492402ac33f13355d596dfc90617ce3a06153f369c7597c91d9aa0d6092" Feb 24 02:15:48.180639 master-0 kubenswrapper[7864]: I0224 02:15:48.180553 7864 scope.go:117] "RemoveContainer" containerID="d67efce29a4c7dadf9def673a29b605a940ecf24b2d87b4ec084d429002c032e" Feb 24 02:15:48.206483 master-0 kubenswrapper[7864]: I0224 02:15:48.206434 7864 scope.go:117] "RemoveContainer" containerID="a8c66d27d61884b6fe77ab4fbf5e74a8d795491882a22f1608be7b10b5068b90" Feb 24 02:15:48.240175 master-0 
kubenswrapper[7864]: I0224 02:15:48.240121 7864 scope.go:117] "RemoveContainer" containerID="40cdc6ec977c48256a7824f9e0a45c0db3de1ee50c08fc0891aa0798ab321016" Feb 24 02:15:48.275437 master-0 kubenswrapper[7864]: I0224 02:15:48.275350 7864 scope.go:117] "RemoveContainer" containerID="dd25a51f43f2a6b08e822686b9a5c9a9800c428fe9e69fa2ec6e4544069e9236" Feb 24 02:15:50.367830 master-0 kubenswrapper[7864]: E0224 02:15:50.367696 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:15:50.548633 master-0 kubenswrapper[7864]: E0224 02:15:50.548395 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.18970d0a5b7c7692 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:15:16.529297042 +0000 UTC m=+680.856950774,LastTimestamp:2026-02-24 02:15:16.529297042 +0000 UTC m=+680.856950774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:15:52.874420 master-0 kubenswrapper[7864]: I0224 02:15:52.874334 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 24 02:15:52.875675 master-0 kubenswrapper[7864]: I0224 02:15:52.874964 7864 scope.go:117] "RemoveContainer" containerID="2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1" Feb 24 02:15:52.876203 master-0 kubenswrapper[7864]: E0224 02:15:52.876145 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:15:52.913491 master-0 kubenswrapper[7864]: I0224 02:15:52.913412 7864 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219" Feb 24 02:15:52.913491 master-0 kubenswrapper[7864]: I0224 02:15:52.913477 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219" Feb 24 02:15:55.146222 master-0 kubenswrapper[7864]: I0224 02:15:55.146134 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_c12652f5-003f-4b77-b2bb-b666c9d7bb53/installer/0.log" Feb 24 02:15:55.147159 master-0 kubenswrapper[7864]: I0224 02:15:55.146234 7864 generic.go:334] "Generic (PLEG): container finished" podID="c12652f5-003f-4b77-b2bb-b666c9d7bb53" containerID="50801a56a4404416a44874540419cd05a4a4bedf1fb5022f9e0b4725f3c11f4d" exitCode=1 Feb 24 02:15:55.147159 master-0 kubenswrapper[7864]: I0224 02:15:55.146291 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" 
event={"ID":"c12652f5-003f-4b77-b2bb-b666c9d7bb53","Type":"ContainerDied","Data":"50801a56a4404416a44874540419cd05a4a4bedf1fb5022f9e0b4725f3c11f4d"} Feb 24 02:15:56.583605 master-0 kubenswrapper[7864]: I0224 02:15:56.583532 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_c12652f5-003f-4b77-b2bb-b666c9d7bb53/installer/0.log" Feb 24 02:15:56.584277 master-0 kubenswrapper[7864]: I0224 02:15:56.583706 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:15:56.701389 master-0 kubenswrapper[7864]: I0224 02:15:56.701316 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kubelet-dir\") pod \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " Feb 24 02:15:56.701532 master-0 kubenswrapper[7864]: I0224 02:15:56.701463 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c12652f5-003f-4b77-b2bb-b666c9d7bb53" (UID: "c12652f5-003f-4b77-b2bb-b666c9d7bb53"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:56.701680 master-0 kubenswrapper[7864]: I0224 02:15:56.701542 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kube-api-access\") pod \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " Feb 24 02:15:56.701794 master-0 kubenswrapper[7864]: I0224 02:15:56.701684 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-var-lock\") pod \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\" (UID: \"c12652f5-003f-4b77-b2bb-b666c9d7bb53\") " Feb 24 02:15:56.701872 master-0 kubenswrapper[7864]: I0224 02:15:56.701763 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-var-lock" (OuterVolumeSpecName: "var-lock") pod "c12652f5-003f-4b77-b2bb-b666c9d7bb53" (UID: "c12652f5-003f-4b77-b2bb-b666c9d7bb53"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:15:56.702347 master-0 kubenswrapper[7864]: I0224 02:15:56.702297 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:56.702347 master-0 kubenswrapper[7864]: I0224 02:15:56.702334 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:56.706666 master-0 kubenswrapper[7864]: I0224 02:15:56.706603 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c12652f5-003f-4b77-b2bb-b666c9d7bb53" (UID: "c12652f5-003f-4b77-b2bb-b666c9d7bb53"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:15:56.803667 master-0 kubenswrapper[7864]: I0224 02:15:56.803494 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c12652f5-003f-4b77-b2bb-b666c9d7bb53-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:15:57.168977 master-0 kubenswrapper[7864]: I0224 02:15:57.168765 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_c12652f5-003f-4b77-b2bb-b666c9d7bb53/installer/0.log" Feb 24 02:15:57.168977 master-0 kubenswrapper[7864]: I0224 02:15:57.168851 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"c12652f5-003f-4b77-b2bb-b666c9d7bb53","Type":"ContainerDied","Data":"d91c1d25f97f5902e0cd98da21fb3d84dc557631fbc1bb6bed501fad908da85d"} Feb 24 02:15:57.168977 master-0 kubenswrapper[7864]: I0224 02:15:57.168895 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91c1d25f97f5902e0cd98da21fb3d84dc557631fbc1bb6bed501fad908da85d" Feb 24 02:15:57.169455 master-0 kubenswrapper[7864]: I0224 02:15:57.169001 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 02:16:00.368842 master-0 kubenswrapper[7864]: E0224 02:16:00.368690 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:16:07.876461 master-0 kubenswrapper[7864]: I0224 02:16:07.876329 7864 scope.go:117] "RemoveContainer" containerID="2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1" Feb 24 02:16:07.877356 master-0 kubenswrapper[7864]: E0224 02:16:07.876888 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:16:10.292775 master-0 kubenswrapper[7864]: I0224 02:16:10.292687 7864 generic.go:334] "Generic (PLEG): container finished" podID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerID="2b3213a5477253a9e7e61477d63287c522e2dc114274fd4d6193d811f2552967" exitCode=0 Feb 24 02:16:10.293466 master-0 kubenswrapper[7864]: I0224 02:16:10.292780 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerDied","Data":"2b3213a5477253a9e7e61477d63287c522e2dc114274fd4d6193d811f2552967"} Feb 24 02:16:10.293466 master-0 kubenswrapper[7864]: I0224 02:16:10.292883 7864 scope.go:117] "RemoveContainer" containerID="160afcf676f240f4edef48c62c951182da5bbf1b49c67d215747997188263486" Feb 24 02:16:10.369426 master-0 kubenswrapper[7864]: E0224 02:16:10.369331 7864 controller.go:195] 
"Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:16:10.369837 master-0 kubenswrapper[7864]: I0224 02:16:10.369787 7864 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 02:16:11.308638 master-0 kubenswrapper[7864]: I0224 02:16:11.308394 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerStarted","Data":"16b88bdb19342563d81116f8e13c7e868beadde1813cafcd204be1678600a199"} Feb 24 02:16:12.081510 master-0 kubenswrapper[7864]: I0224 02:16:12.081375 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:16:12.081510 master-0 kubenswrapper[7864]: I0224 02:16:12.081467 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:16:12.085764 master-0 kubenswrapper[7864]: I0224 02:16:12.085662 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:12.085764 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:12.085764 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:12.085764 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:12.086120 master-0 kubenswrapper[7864]: I0224 02:16:12.085817 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:13.084838 master-0 kubenswrapper[7864]: I0224 02:16:13.084760 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:13.084838 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:13.084838 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:13.084838 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:13.086748 master-0 kubenswrapper[7864]: I0224 02:16:13.084867 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:14.084734 master-0 kubenswrapper[7864]: I0224 02:16:14.084656 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:14.084734 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:14.084734 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:14.084734 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:14.085762 master-0 kubenswrapper[7864]: I0224 02:16:14.084767 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:15.085088 master-0 kubenswrapper[7864]: I0224 02:16:15.085006 7864 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:15.085088 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:15.085088 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:15.085088 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:15.086188 master-0 kubenswrapper[7864]: I0224 02:16:15.085104 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:16.084643 master-0 kubenswrapper[7864]: I0224 02:16:16.084517 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:16.084643 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:16.084643 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:16.084643 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:16.085169 master-0 kubenswrapper[7864]: I0224 02:16:16.084687 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:17.085288 master-0 kubenswrapper[7864]: I0224 02:16:17.085130 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
02:16:17.085288 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:17.085288 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:17.085288 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:17.085288 master-0 kubenswrapper[7864]: I0224 02:16:17.085274 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:18.085652 master-0 kubenswrapper[7864]: I0224 02:16:18.085525 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:18.085652 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:18.085652 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:18.085652 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:18.086819 master-0 kubenswrapper[7864]: I0224 02:16:18.085679 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:18.373476 master-0 kubenswrapper[7864]: I0224 02:16:18.373358 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-p5b6q_adc1097b-c1ab-4f09-965d-1c819671475b/approver/1.log" Feb 24 02:16:18.375257 master-0 kubenswrapper[7864]: I0224 02:16:18.375213 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-p5b6q_adc1097b-c1ab-4f09-965d-1c819671475b/approver/0.log" Feb 24 02:16:18.376035 master-0 
kubenswrapper[7864]: I0224 02:16:18.375959 7864 generic.go:334] "Generic (PLEG): container finished" podID="adc1097b-c1ab-4f09-965d-1c819671475b" containerID="a75bd245067c2bd96dc6595a801a2c01cb01bd1d3a373e46bfec3a120455dff3" exitCode=1 Feb 24 02:16:18.376164 master-0 kubenswrapper[7864]: I0224 02:16:18.376065 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerDied","Data":"a75bd245067c2bd96dc6595a801a2c01cb01bd1d3a373e46bfec3a120455dff3"} Feb 24 02:16:18.376240 master-0 kubenswrapper[7864]: I0224 02:16:18.376203 7864 scope.go:117] "RemoveContainer" containerID="40959528c0e652134371f4afb20a4ee849f4f1c1c0599ddd64b9076a7771bc13" Feb 24 02:16:18.377435 master-0 kubenswrapper[7864]: I0224 02:16:18.377373 7864 scope.go:117] "RemoveContainer" containerID="a75bd245067c2bd96dc6595a801a2c01cb01bd1d3a373e46bfec3a120455dff3" Feb 24 02:16:18.377820 master-0 kubenswrapper[7864]: E0224 02:16:18.377760 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-p5b6q_openshift-network-node-identity(adc1097b-c1ab-4f09-965d-1c819671475b)\"" pod="openshift-network-node-identity/network-node-identity-p5b6q" podUID="adc1097b-c1ab-4f09-965d-1c819671475b" Feb 24 02:16:19.085514 master-0 kubenswrapper[7864]: I0224 02:16:19.085427 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:19.085514 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:19.085514 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:19.085514 master-0 kubenswrapper[7864]: healthz check 
failed Feb 24 02:16:19.086694 master-0 kubenswrapper[7864]: I0224 02:16:19.085532 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:19.389267 master-0 kubenswrapper[7864]: I0224 02:16:19.389139 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-p5b6q_adc1097b-c1ab-4f09-965d-1c819671475b/approver/1.log" Feb 24 02:16:19.875878 master-0 kubenswrapper[7864]: I0224 02:16:19.875801 7864 scope.go:117] "RemoveContainer" containerID="2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1" Feb 24 02:16:20.085965 master-0 kubenswrapper[7864]: I0224 02:16:20.085816 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:20.085965 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:20.085965 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:20.085965 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:20.085965 master-0 kubenswrapper[7864]: I0224 02:16:20.085939 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:20.371138 master-0 kubenswrapper[7864]: E0224 02:16:20.370715 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" interval="200ms" Feb 24 02:16:20.409246 master-0 kubenswrapper[7864]: I0224 02:16:20.409109 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"5b0e89fd54937406992276c2b46728950e8622b38ab169cee211632dcaf018e1"} Feb 24 02:16:21.086045 master-0 kubenswrapper[7864]: I0224 02:16:21.085970 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:21.086045 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:21.086045 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:21.086045 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:21.087667 master-0 kubenswrapper[7864]: I0224 02:16:21.086085 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:22.084458 master-0 kubenswrapper[7864]: I0224 02:16:22.084381 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:22.084458 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:22.084458 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:22.084458 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:22.085226 master-0 kubenswrapper[7864]: I0224 02:16:22.085171 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:22.418049 master-0 kubenswrapper[7864]: I0224 02:16:22.417806 7864 status_manager.go:851] "Failed to get status for pod" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods multus-admission-controller-5f98f4f8d5-dg77f)" Feb 24 02:16:23.085198 master-0 kubenswrapper[7864]: I0224 02:16:23.085084 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:23.085198 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:23.085198 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:23.085198 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:23.085752 master-0 kubenswrapper[7864]: I0224 02:16:23.085219 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:24.084965 master-0 kubenswrapper[7864]: I0224 02:16:24.084860 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:24.084965 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:24.084965 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 
02:16:24.084965 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:24.086373 master-0 kubenswrapper[7864]: I0224 02:16:24.084981 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:24.552951 master-0 kubenswrapper[7864]: E0224 02:16:24.552707 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Feb 24 02:16:24.552951 master-0 kubenswrapper[7864]: &Event{ObjectMeta:{router-default-7b65dc9fcb-22sgl.18970cc986e863a2 openshift-ingress 10853 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7b65dc9fcb-22sgl,UID:6a08a1e4-cf92-4733-a8af-c7ac5b21e925,APIVersion:v1,ResourceVersion:10250,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Feb 24 02:16:24.552951 master-0 kubenswrapper[7864]: body: [-]backend-http failed: reason withheld Feb 24 02:16:24.552951 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:24.552951 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:24.552951 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:24.552951 master-0 kubenswrapper[7864]: Feb 24 02:16:24.552951 master-0 kubenswrapper[7864]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:10:38 +0000 UTC,LastTimestamp:2026-02-24 02:15:17.084745161 +0000 UTC m=+681.412398813,Count:233,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 24 02:16:24.552951 master-0 kubenswrapper[7864]: > Feb 24 02:16:25.085247 master-0 
kubenswrapper[7864]: I0224 02:16:25.085148 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:25.085247 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:25.085247 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:25.085247 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:25.086238 master-0 kubenswrapper[7864]: I0224 02:16:25.085274 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:26.085240 master-0 kubenswrapper[7864]: I0224 02:16:26.085167 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:16:26.085240 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:16:26.085240 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:16:26.085240 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:16:26.086110 master-0 kubenswrapper[7864]: I0224 02:16:26.085288 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:16:26.917392 master-0 kubenswrapper[7864]: E0224 02:16:26.917303 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
pod="openshift-etcd/etcd-master-0"
Feb 24 02:16:26.918046 master-0 kubenswrapper[7864]: I0224 02:16:26.918001 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 24 02:16:26.957895 master-0 kubenswrapper[7864]: W0224 02:16:26.957847 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb419b8533666d3ae7054c771ce97a95f.slice/crio-c28e00882d5ec6f229538a77e2756ba9244f00b81361306d5103b5a9571bb19f WatchSource:0}: Error finding container c28e00882d5ec6f229538a77e2756ba9244f00b81361306d5103b5a9571bb19f: Status 404 returned error can't find the container with id c28e00882d5ec6f229538a77e2756ba9244f00b81361306d5103b5a9571bb19f
Feb 24 02:16:27.085385 master-0 kubenswrapper[7864]: I0224 02:16:27.085343 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:27.085385 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:27.085385 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:27.085385 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:27.086366 master-0 kubenswrapper[7864]: I0224 02:16:27.086318 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:27.217472 master-0 kubenswrapper[7864]: I0224 02:16:27.217445 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:16:27.475568 master-0 kubenswrapper[7864]: I0224 02:16:27.475491 7864 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="e6d11eb8af0756a7414e361a0a41884731f78257822ffdb122c02d11a1914c35" exitCode=0
Feb 24 02:16:27.475844 master-0 kubenswrapper[7864]: I0224 02:16:27.475606 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"e6d11eb8af0756a7414e361a0a41884731f78257822ffdb122c02d11a1914c35"}
Feb 24 02:16:27.475844 master-0 kubenswrapper[7864]: I0224 02:16:27.475713 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"c28e00882d5ec6f229538a77e2756ba9244f00b81361306d5103b5a9571bb19f"}
Feb 24 02:16:27.476338 master-0 kubenswrapper[7864]: I0224 02:16:27.476268 7864 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219"
Feb 24 02:16:27.476338 master-0 kubenswrapper[7864]: I0224 02:16:27.476326 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219"
Feb 24 02:16:28.085018 master-0 kubenswrapper[7864]: I0224 02:16:28.084905 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:28.085018 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:28.085018 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:28.085018 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:28.086216 master-0 kubenswrapper[7864]: I0224 02:16:28.085041 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:28.875135 master-0 kubenswrapper[7864]: I0224 02:16:28.875070 7864 scope.go:117] "RemoveContainer" containerID="a75bd245067c2bd96dc6595a801a2c01cb01bd1d3a373e46bfec3a120455dff3"
Feb 24 02:16:29.085337 master-0 kubenswrapper[7864]: I0224 02:16:29.085259 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:29.085337 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:29.085337 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:29.085337 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:29.086297 master-0 kubenswrapper[7864]: I0224 02:16:29.085356 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:29.495494 master-0 kubenswrapper[7864]: I0224 02:16:29.495361 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-p5b6q_adc1097b-c1ab-4f09-965d-1c819671475b/approver/1.log"
Feb 24 02:16:29.496151 master-0 kubenswrapper[7864]: I0224 02:16:29.496105 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerStarted","Data":"ebba0a8319d4fdb20034a34d2683c5848263067d143ef6656f178fc12ac2c957"}
Feb 24 02:16:30.085072 master-0 kubenswrapper[7864]: I0224 02:16:30.084975 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:30.085072 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:30.085072 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:30.085072 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:30.086214 master-0 kubenswrapper[7864]: I0224 02:16:30.085081 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:30.213196 master-0 kubenswrapper[7864]: I0224 02:16:30.213111 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:16:30.226933 master-0 kubenswrapper[7864]: I0224 02:16:30.226825 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:16:30.572758 master-0 kubenswrapper[7864]: E0224 02:16:30.572646 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Feb 24 02:16:31.084971 master-0 kubenswrapper[7864]: I0224 02:16:31.084886 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:31.084971 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:31.084971 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:31.084971 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:31.085507 master-0 kubenswrapper[7864]: I0224 02:16:31.085001 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:32.084545 master-0 kubenswrapper[7864]: I0224 02:16:32.084466 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:32.084545 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:32.084545 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:32.084545 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:32.085679 master-0 kubenswrapper[7864]: I0224 02:16:32.084560 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:33.085066 master-0 kubenswrapper[7864]: I0224 02:16:33.084979 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:33.085066 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:33.085066 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:33.085066 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:33.086851 master-0 kubenswrapper[7864]: I0224 02:16:33.085091 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:34.085559 master-0 kubenswrapper[7864]: I0224 02:16:34.085445 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:34.085559 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:34.085559 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:34.085559 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:34.086377 master-0 kubenswrapper[7864]: I0224 02:16:34.085623 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:34.544081 master-0 kubenswrapper[7864]: I0224 02:16:34.544001 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/3.log"
Feb 24 02:16:34.545662 master-0 kubenswrapper[7864]: I0224 02:16:34.545562 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/2.log"
Feb 24 02:16:34.546352 master-0 kubenswrapper[7864]: I0224 02:16:34.546283 7864 generic.go:334] "Generic (PLEG): container finished" podID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" containerID="049ca3ed1273887ef2dfb1c46cae5f8f4c14254dc46c7407bca3af34b7c3bdfe" exitCode=1
Feb 24 02:16:34.546456 master-0 kubenswrapper[7864]: I0224 02:16:34.546363 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerDied","Data":"049ca3ed1273887ef2dfb1c46cae5f8f4c14254dc46c7407bca3af34b7c3bdfe"}
Feb 24 02:16:34.546522 master-0 kubenswrapper[7864]: I0224 02:16:34.546494 7864 scope.go:117] "RemoveContainer" containerID="f672e7e1695861ae42afc6a8a2613baf97d7bf401a8f7ddcbb43e4e1e3238a5e"
Feb 24 02:16:34.547158 master-0 kubenswrapper[7864]: I0224 02:16:34.547110 7864 scope.go:117] "RemoveContainer" containerID="049ca3ed1273887ef2dfb1c46cae5f8f4c14254dc46c7407bca3af34b7c3bdfe"
Feb 24 02:16:34.547650 master-0 kubenswrapper[7864]: E0224 02:16:34.547563 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334"
Feb 24 02:16:35.085024 master-0 kubenswrapper[7864]: I0224 02:16:35.084873 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:35.085024 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:35.085024 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:35.085024 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:35.086399 master-0 kubenswrapper[7864]: I0224 02:16:35.085019 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:35.559485 master-0 kubenswrapper[7864]: I0224 02:16:35.559363 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/3.log"
Feb 24 02:16:36.085273 master-0 kubenswrapper[7864]: I0224 02:16:36.085141 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:36.085273 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:36.085273 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:36.085273 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:36.085273 master-0 kubenswrapper[7864]: I0224 02:16:36.085243 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:37.085896 master-0 kubenswrapper[7864]: I0224 02:16:37.085820 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:37.085896 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:37.085896 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:37.085896 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:37.086902 master-0 kubenswrapper[7864]: I0224 02:16:37.085932 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:38.085490 master-0 kubenswrapper[7864]: I0224 02:16:38.085367 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:38.085490 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:38.085490 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:38.085490 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:38.086643 master-0 kubenswrapper[7864]: I0224 02:16:38.085503 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:39.084978 master-0 kubenswrapper[7864]: I0224 02:16:39.084878 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:39.084978 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:39.084978 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:39.084978 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:39.085382 master-0 kubenswrapper[7864]: I0224 02:16:39.084997 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:40.085083 master-0 kubenswrapper[7864]: I0224 02:16:40.085010 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:40.085083 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:40.085083 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:40.085083 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:40.086309 master-0 kubenswrapper[7864]: I0224 02:16:40.086260 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:40.218414 master-0 kubenswrapper[7864]: I0224 02:16:40.218298 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:16:40.974269 master-0 kubenswrapper[7864]: E0224 02:16:40.974166 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Feb 24 02:16:41.085189 master-0 kubenswrapper[7864]: I0224 02:16:41.085070 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:41.085189 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:41.085189 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:41.085189 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:41.086134 master-0 kubenswrapper[7864]: I0224 02:16:41.085220 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:42.084803 master-0 kubenswrapper[7864]: I0224 02:16:42.084736 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:42.084803 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:42.084803 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:42.084803 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:42.086080 master-0 kubenswrapper[7864]: I0224 02:16:42.085893 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:43.085247 master-0 kubenswrapper[7864]: I0224 02:16:43.085127 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:43.085247 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:43.085247 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:43.085247 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:43.087118 master-0 kubenswrapper[7864]: I0224 02:16:43.085279 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:44.084927 master-0 kubenswrapper[7864]: I0224 02:16:44.084809 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:44.084927 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:44.084927 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:44.084927 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:44.084927 master-0 kubenswrapper[7864]: I0224 02:16:44.084900 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:45.086151 master-0 kubenswrapper[7864]: I0224 02:16:45.086047 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:45.086151 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:45.086151 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:45.086151 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:45.086680 master-0 kubenswrapper[7864]: I0224 02:16:45.086170 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:46.085801 master-0 kubenswrapper[7864]: I0224 02:16:46.085714 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:46.085801 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:46.085801 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:46.085801 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:46.086322 master-0 kubenswrapper[7864]: I0224 02:16:46.085808 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:47.084437 master-0 kubenswrapper[7864]: I0224 02:16:47.084322 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:47.084437 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:47.084437 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:47.084437 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:47.085751 master-0 kubenswrapper[7864]: I0224 02:16:47.084453 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:48.085430 master-0 kubenswrapper[7864]: I0224 02:16:48.085339 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:48.085430 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:48.085430 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:48.085430 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:48.086540 master-0 kubenswrapper[7864]: I0224 02:16:48.085458 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:49.085077 master-0 kubenswrapper[7864]: I0224 02:16:49.084999 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:49.085077 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:49.085077 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:49.085077 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:49.087018 master-0 kubenswrapper[7864]: I0224 02:16:49.085707 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:49.874972 master-0 kubenswrapper[7864]: I0224 02:16:49.874875 7864 scope.go:117] "RemoveContainer" containerID="049ca3ed1273887ef2dfb1c46cae5f8f4c14254dc46c7407bca3af34b7c3bdfe"
Feb 24 02:16:49.875427 master-0 kubenswrapper[7864]: E0224 02:16:49.875335 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334"
Feb 24 02:16:50.084965 master-0 kubenswrapper[7864]: I0224 02:16:50.084862 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:50.084965 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:50.084965 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:50.084965 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:50.085482 master-0 kubenswrapper[7864]: I0224 02:16:50.084963 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:50.217293 master-0 kubenswrapper[7864]: I0224 02:16:50.217186 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:16:50.218125 master-0 kubenswrapper[7864]: I0224 02:16:50.217334 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:16:50.218369 master-0 kubenswrapper[7864]: I0224 02:16:50.218302 7864 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"5b0e89fd54937406992276c2b46728950e8622b38ab169cee211632dcaf018e1"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 24 02:16:50.218483 master-0 kubenswrapper[7864]: I0224 02:16:50.218428 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://5b0e89fd54937406992276c2b46728950e8622b38ab169cee211632dcaf018e1" gracePeriod=30
Feb 24 02:16:50.706100 master-0 kubenswrapper[7864]: I0224 02:16:50.706023 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="5b0e89fd54937406992276c2b46728950e8622b38ab169cee211632dcaf018e1" exitCode=2
Feb 24 02:16:50.706422 master-0 kubenswrapper[7864]: I0224 02:16:50.706110 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"5b0e89fd54937406992276c2b46728950e8622b38ab169cee211632dcaf018e1"}
Feb 24 02:16:50.706422 master-0 kubenswrapper[7864]: I0224 02:16:50.706166 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"}
Feb 24 02:16:50.706422 master-0 kubenswrapper[7864]: I0224 02:16:50.706197 7864 scope.go:117] "RemoveContainer" containerID="2ee495a11edbad7f6df2bd3a7a0dabaa3f2d4f8c796ece321404428d107c75f1"
Feb 24 02:16:51.085534 master-0 kubenswrapper[7864]: I0224 02:16:51.085407 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:51.085534 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:51.085534 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:51.085534 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:51.085875 master-0 kubenswrapper[7864]: I0224 02:16:51.085524 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:51.776116 master-0 kubenswrapper[7864]: E0224 02:16:51.775513 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Feb 24 02:16:52.084353 master-0 kubenswrapper[7864]: I0224 02:16:52.084119 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:52.084353 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:52.084353 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:52.084353 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:52.084353 master-0 kubenswrapper[7864]: I0224 02:16:52.084238 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:53.085434 master-0 kubenswrapper[7864]: I0224 02:16:53.085308 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:53.085434 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:53.085434 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:53.085434 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:53.086528 master-0 kubenswrapper[7864]: I0224 02:16:53.085470 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:54.085518 master-0 kubenswrapper[7864]: I0224 02:16:54.085423 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:54.085518 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:54.085518 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:54.085518 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:54.086488 master-0 kubenswrapper[7864]: I0224 02:16:54.085531 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:55.085243 master-0 kubenswrapper[7864]: I0224 02:16:55.085162 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:55.085243 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:55.085243 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:55.085243 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:55.086408 master-0 kubenswrapper[7864]: I0224 02:16:55.085790 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:56.085484 master-0 kubenswrapper[7864]: I0224 02:16:56.085368 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:56.085484 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:56.085484 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:56.085484 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:56.085484 master-0 kubenswrapper[7864]: I0224 02:16:56.085479 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:57.085747 master-0 kubenswrapper[7864]: I0224 02:16:57.085571 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:57.085747 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:57.085747 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:57.085747 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:57.086994 master-0 kubenswrapper[7864]: I0224 02:16:57.085752 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:57.217179 master-0 kubenswrapper[7864]: I0224 02:16:57.217087 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:16:58.085827 master-0 kubenswrapper[7864]: I0224 02:16:58.085669 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:58.085827 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:58.085827 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:58.085827 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:58.085827 master-0 kubenswrapper[7864]: I0224 02:16:58.085773 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:16:58.558521 master-0 kubenswrapper[7864]: E0224 02:16:58.558187 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970d0d76c45509 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:15:29.871893769 +0000 UTC m=+694.199547421,LastTimestamp:2026-02-24 02:15:29.871893769 +0000 UTC m=+694.199547421,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:16:59.085522 master-0 kubenswrapper[7864]: I0224 02:16:59.085428 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:16:59.085522 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:16:59.085522 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:16:59.085522 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:16:59.086024 master-0 kubenswrapper[7864]: I0224 02:16:59.085540 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:00.085771 master-0 kubenswrapper[7864]: I0224 02:17:00.085625 7864 patch_prober.go:28] interesting
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:00.085771 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:00.085771 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:00.085771 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:00.085771 master-0 kubenswrapper[7864]: I0224 02:17:00.085736 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:00.213349 master-0 kubenswrapper[7864]: I0224 02:17:00.213272 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:17:00.218227 master-0 kubenswrapper[7864]: I0224 02:17:00.218080 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 02:17:01.085970 master-0 kubenswrapper[7864]: I0224 02:17:01.085876 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:01.085970 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:01.085970 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:01.085970 master-0 kubenswrapper[7864]: healthz check 
failed Feb 24 02:17:01.087105 master-0 kubenswrapper[7864]: I0224 02:17:01.086009 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:01.479925 master-0 kubenswrapper[7864]: E0224 02:17:01.479845 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 24 02:17:02.085260 master-0 kubenswrapper[7864]: I0224 02:17:02.085103 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:02.085260 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:02.085260 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:02.085260 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:02.085260 master-0 kubenswrapper[7864]: I0224 02:17:02.085215 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:02.825739 master-0 kubenswrapper[7864]: I0224 02:17:02.825602 7864 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="e47b414c847a54539f28e830435dfe61ba2d4309c2e9e84ac24e938ca23917ff" exitCode=0 Feb 24 02:17:02.825739 master-0 kubenswrapper[7864]: I0224 02:17:02.825679 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"e47b414c847a54539f28e830435dfe61ba2d4309c2e9e84ac24e938ca23917ff"} Feb 24 02:17:02.827003 master-0 kubenswrapper[7864]: I0224 02:17:02.826139 7864 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219" Feb 24 02:17:02.827003 master-0 kubenswrapper[7864]: I0224 02:17:02.826163 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219" Feb 24 02:17:03.086015 master-0 kubenswrapper[7864]: I0224 02:17:03.085809 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:03.086015 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:03.086015 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:03.086015 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:03.086015 master-0 kubenswrapper[7864]: I0224 02:17:03.085928 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:03.377765 master-0 kubenswrapper[7864]: E0224 02:17:03.377317 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="3.2s" Feb 24 02:17:03.875514 master-0 kubenswrapper[7864]: I0224 02:17:03.875401 7864 scope.go:117] "RemoveContainer" containerID="049ca3ed1273887ef2dfb1c46cae5f8f4c14254dc46c7407bca3af34b7c3bdfe" Feb 24 02:17:03.876196 master-0 
kubenswrapper[7864]: E0224 02:17:03.876160 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" Feb 24 02:17:04.085365 master-0 kubenswrapper[7864]: I0224 02:17:04.085214 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:04.085365 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:04.085365 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:04.085365 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:04.085365 master-0 kubenswrapper[7864]: I0224 02:17:04.085340 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:04.847999 master-0 kubenswrapper[7864]: I0224 02:17:04.847857 7864 generic.go:334] "Generic (PLEG): container finished" podID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerID="2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a" exitCode=0 Feb 24 02:17:04.848342 master-0 kubenswrapper[7864]: I0224 02:17:04.848019 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerDied","Data":"2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a"} Feb 24 02:17:04.848342 master-0 
kubenswrapper[7864]: I0224 02:17:04.848084 7864 scope.go:117] "RemoveContainer" containerID="f88738c7cb3808e8ebb5ddd209f4e28577d6aec5e69f689e145c36b78a77fe4b" Feb 24 02:17:04.848834 master-0 kubenswrapper[7864]: I0224 02:17:04.848782 7864 scope.go:117] "RemoveContainer" containerID="2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a" Feb 24 02:17:04.849211 master-0 kubenswrapper[7864]: E0224 02:17:04.849164 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-4qf9p_openshift-marketplace(91d16f7b-390a-4d9d-99d6-cc8e210801d1)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" Feb 24 02:17:04.852436 master-0 kubenswrapper[7864]: I0224 02:17:04.852366 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/config-sync-controllers/0.log" Feb 24 02:17:04.853083 master-0 kubenswrapper[7864]: I0224 02:17:04.853006 7864 generic.go:334] "Generic (PLEG): container finished" podID="8e70a9f5-1154-40e9-a487-21e36e7f420a" containerID="092cbfe8313c87ea7a79610f389e04195756c11c4aca575ebaf70dbe1a3f496d" exitCode=1 Feb 24 02:17:04.853195 master-0 kubenswrapper[7864]: I0224 02:17:04.853092 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerDied","Data":"092cbfe8313c87ea7a79610f389e04195756c11c4aca575ebaf70dbe1a3f496d"} Feb 24 02:17:04.854567 master-0 kubenswrapper[7864]: I0224 02:17:04.854510 7864 scope.go:117] "RemoveContainer" 
containerID="092cbfe8313c87ea7a79610f389e04195756c11c4aca575ebaf70dbe1a3f496d" Feb 24 02:17:05.086565 master-0 kubenswrapper[7864]: I0224 02:17:05.086481 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:05.086565 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:05.086565 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:05.086565 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:05.087466 master-0 kubenswrapper[7864]: I0224 02:17:05.086621 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:05.867326 master-0 kubenswrapper[7864]: I0224 02:17:05.867245 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/3.log" Feb 24 02:17:05.868213 master-0 kubenswrapper[7864]: I0224 02:17:05.868156 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/2.log" Feb 24 02:17:05.868309 master-0 kubenswrapper[7864]: I0224 02:17:05.868250 7864 generic.go:334] "Generic (PLEG): container finished" podID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" containerID="361cb30b3b5fa4aef37729ebe34ed6dc589b5812da2325814ef9feeb894335c5" exitCode=1 Feb 24 02:17:05.868432 master-0 kubenswrapper[7864]: I0224 02:17:05.868373 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerDied","Data":"361cb30b3b5fa4aef37729ebe34ed6dc589b5812da2325814ef9feeb894335c5"} Feb 24 02:17:05.868608 master-0 kubenswrapper[7864]: I0224 02:17:05.868467 7864 scope.go:117] "RemoveContainer" containerID="4291e0907ef19a32f51af939efa4c04ce2db9b3453891c7e12142da00a40e520" Feb 24 02:17:05.869205 master-0 kubenswrapper[7864]: I0224 02:17:05.869153 7864 scope.go:117] "RemoveContainer" containerID="361cb30b3b5fa4aef37729ebe34ed6dc589b5812da2325814ef9feeb894335c5" Feb 24 02:17:05.869654 master-0 kubenswrapper[7864]: E0224 02:17:05.869517 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" Feb 24 02:17:05.875084 master-0 kubenswrapper[7864]: I0224 02:17:05.875011 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/config-sync-controllers/0.log" Feb 24 02:17:05.885950 master-0 kubenswrapper[7864]: I0224 02:17:05.885890 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"f39565eb2bd7d13e3f60e289b141d73bd5aa5e4222b88ea5807cf96c776110cb"} Feb 24 02:17:06.084773 master-0 kubenswrapper[7864]: I0224 02:17:06.084644 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:06.084773 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:06.084773 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:06.084773 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:06.084773 master-0 kubenswrapper[7864]: I0224 02:17:06.084734 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:06.889991 master-0 kubenswrapper[7864]: I0224 02:17:06.889906 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/config-sync-controllers/0.log" Feb 24 02:17:06.890996 master-0 kubenswrapper[7864]: I0224 02:17:06.890842 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/cluster-cloud-controller-manager/0.log" Feb 24 02:17:06.890996 master-0 kubenswrapper[7864]: I0224 02:17:06.890921 7864 generic.go:334] "Generic (PLEG): container finished" podID="8e70a9f5-1154-40e9-a487-21e36e7f420a" containerID="d769b62bae7da248060974b37bc61a65e0831df5e231d5c8e62a89bc58f3df85" exitCode=1 Feb 24 02:17:06.891144 master-0 kubenswrapper[7864]: I0224 02:17:06.891028 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" 
event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerDied","Data":"d769b62bae7da248060974b37bc61a65e0831df5e231d5c8e62a89bc58f3df85"} Feb 24 02:17:06.892189 master-0 kubenswrapper[7864]: I0224 02:17:06.892142 7864 scope.go:117] "RemoveContainer" containerID="d769b62bae7da248060974b37bc61a65e0831df5e231d5c8e62a89bc58f3df85" Feb 24 02:17:06.895920 master-0 kubenswrapper[7864]: I0224 02:17:06.895868 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/3.log" Feb 24 02:17:07.085884 master-0 kubenswrapper[7864]: I0224 02:17:07.085771 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:07.085884 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:07.085884 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:07.085884 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:07.086350 master-0 kubenswrapper[7864]: I0224 02:17:07.085884 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:07.909955 master-0 kubenswrapper[7864]: I0224 02:17:07.909864 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/config-sync-controllers/0.log" Feb 24 02:17:07.911013 master-0 kubenswrapper[7864]: I0224 02:17:07.910939 7864 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/cluster-cloud-controller-manager/0.log" Feb 24 02:17:07.911162 master-0 kubenswrapper[7864]: I0224 02:17:07.911081 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"97b35331462aaa0a369d9be117443f796eda569592d8ad8fbb17987616408b1a"} Feb 24 02:17:08.085161 master-0 kubenswrapper[7864]: I0224 02:17:08.085068 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:08.085161 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:08.085161 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:08.085161 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:08.085641 master-0 kubenswrapper[7864]: I0224 02:17:08.085160 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:08.939647 master-0 kubenswrapper[7864]: I0224 02:17:08.939486 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:17:08.941131 master-0 kubenswrapper[7864]: I0224 02:17:08.939967 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:17:08.941375 master-0 kubenswrapper[7864]: I0224 02:17:08.941313 7864 
scope.go:117] "RemoveContainer" containerID="2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a" Feb 24 02:17:08.942073 master-0 kubenswrapper[7864]: E0224 02:17:08.942006 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-4qf9p_openshift-marketplace(91d16f7b-390a-4d9d-99d6-cc8e210801d1)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" Feb 24 02:17:09.085265 master-0 kubenswrapper[7864]: I0224 02:17:09.085144 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:09.085265 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:09.085265 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:09.085265 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:09.085677 master-0 kubenswrapper[7864]: I0224 02:17:09.085262 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:09.928024 master-0 kubenswrapper[7864]: I0224 02:17:09.927889 7864 scope.go:117] "RemoveContainer" containerID="2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a" Feb 24 02:17:09.928420 master-0 kubenswrapper[7864]: E0224 02:17:09.928229 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator 
pod=marketplace-operator-6f5488b997-4qf9p_openshift-marketplace(91d16f7b-390a-4d9d-99d6-cc8e210801d1)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" Feb 24 02:17:10.084952 master-0 kubenswrapper[7864]: I0224 02:17:10.084853 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:10.084952 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:10.084952 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:10.084952 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:10.085652 master-0 kubenswrapper[7864]: I0224 02:17:10.084979 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:10.226105 master-0 kubenswrapper[7864]: I0224 02:17:10.217378 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 02:17:11.085341 master-0 kubenswrapper[7864]: I0224 02:17:11.085174 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:11.085341 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:11.085341 
master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:11.085341 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:11.085341 master-0 kubenswrapper[7864]: I0224 02:17:11.085321 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:11.974838 master-0 kubenswrapper[7864]: I0224 02:17:11.974740 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhklz_4f5b3b93-a59d-495c-a311-8913fa6000fc/manager/1.log" Feb 24 02:17:11.976044 master-0 kubenswrapper[7864]: I0224 02:17:11.975988 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhklz_4f5b3b93-a59d-495c-a311-8913fa6000fc/manager/0.log" Feb 24 02:17:11.977071 master-0 kubenswrapper[7864]: I0224 02:17:11.976977 7864 generic.go:334] "Generic (PLEG): container finished" podID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerID="8ea7a40a6269170c26a7054768f3ac9bed9d2f95f70afef7c8db1dfa7590c2a4" exitCode=1 Feb 24 02:17:11.977199 master-0 kubenswrapper[7864]: I0224 02:17:11.977111 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerDied","Data":"8ea7a40a6269170c26a7054768f3ac9bed9d2f95f70afef7c8db1dfa7590c2a4"} Feb 24 02:17:11.977286 master-0 kubenswrapper[7864]: I0224 02:17:11.977220 7864 scope.go:117] "RemoveContainer" containerID="2a70331e31f309db225d3996274bc257195cff624763144e3200d4a89257b219" Feb 24 02:17:11.978334 master-0 kubenswrapper[7864]: I0224 02:17:11.978299 7864 scope.go:117] "RemoveContainer" containerID="8ea7a40a6269170c26a7054768f3ac9bed9d2f95f70afef7c8db1dfa7590c2a4" Feb 24 02:17:11.978789 master-0 
kubenswrapper[7864]: E0224 02:17:11.978733 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-84b8d9d697-jhklz_openshift-catalogd(4f5b3b93-a59d-495c-a311-8913fa6000fc)\"" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc"
Feb 24 02:17:11.980480 master-0 kubenswrapper[7864]: I0224 02:17:11.979844 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-hvr8b_4a2d8ef6-14ac-490d-a931-7082344d3f46/manager/1.log"
Feb 24 02:17:11.982118 master-0 kubenswrapper[7864]: I0224 02:17:11.982068 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-hvr8b_4a2d8ef6-14ac-490d-a931-7082344d3f46/manager/0.log"
Feb 24 02:17:11.982258 master-0 kubenswrapper[7864]: I0224 02:17:11.982198 7864 generic.go:334] "Generic (PLEG): container finished" podID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerID="8f261203a7d383fb41e07035f6494d12c5ece1ea073bc54bdb848ad1f13db388" exitCode=1
Feb 24 02:17:11.982515 master-0 kubenswrapper[7864]: I0224 02:17:11.982299 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerDied","Data":"8f261203a7d383fb41e07035f6494d12c5ece1ea073bc54bdb848ad1f13db388"}
Feb 24 02:17:11.983451 master-0 kubenswrapper[7864]: I0224 02:17:11.983402 7864 scope.go:117] "RemoveContainer" containerID="8f261203a7d383fb41e07035f6494d12c5ece1ea073bc54bdb848ad1f13db388"
Feb 24 02:17:11.983873 master-0 kubenswrapper[7864]: E0224 02:17:11.983820 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-9cc7d7bb-hvr8b_openshift-operator-controller(4a2d8ef6-14ac-490d-a931-7082344d3f46)\"" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46"
Feb 24 02:17:12.033410 master-0 kubenswrapper[7864]: I0224 02:17:12.033315 7864 scope.go:117] "RemoveContainer" containerID="e69376d98cee67244b069177748eb8161f1ffee16e9b9f5abd63b6aff145de6c"
Feb 24 02:17:12.085111 master-0 kubenswrapper[7864]: I0224 02:17:12.085054 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:12.085111 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:12.085111 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:12.085111 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:12.085405 master-0 kubenswrapper[7864]: I0224 02:17:12.085126 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:12.997408 master-0 kubenswrapper[7864]: I0224 02:17:12.997329 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-hvr8b_4a2d8ef6-14ac-490d-a931-7082344d3f46/manager/1.log"
Feb 24 02:17:13.001242 master-0 kubenswrapper[7864]: I0224 02:17:13.001173 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhklz_4f5b3b93-a59d-495c-a311-8913fa6000fc/manager/1.log"
Feb 24 02:17:13.085764 master-0 kubenswrapper[7864]: I0224 02:17:13.085670 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:13.085764 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:13.085764 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:13.085764 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:13.086699 master-0 kubenswrapper[7864]: I0224 02:17:13.085789 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:14.085733 master-0 kubenswrapper[7864]: I0224 02:17:14.085552 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:14.085733 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:14.085733 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:14.085733 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:14.085733 master-0 kubenswrapper[7864]: I0224 02:17:14.085710 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:15.085451 master-0 kubenswrapper[7864]: I0224 02:17:15.085313 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:15.085451 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:15.085451 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:15.085451 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:15.086908 master-0 kubenswrapper[7864]: I0224 02:17:15.085450 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:15.674437 master-0 kubenswrapper[7864]: I0224 02:17:15.674344 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz"
Feb 24 02:17:15.675201 master-0 kubenswrapper[7864]: I0224 02:17:15.675150 7864 scope.go:117] "RemoveContainer" containerID="8ea7a40a6269170c26a7054768f3ac9bed9d2f95f70afef7c8db1dfa7590c2a4"
Feb 24 02:17:15.675669 master-0 kubenswrapper[7864]: E0224 02:17:15.675566 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-84b8d9d697-jhklz_openshift-catalogd(4f5b3b93-a59d-495c-a311-8913fa6000fc)\"" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" podUID="4f5b3b93-a59d-495c-a311-8913fa6000fc"
Feb 24 02:17:15.849416 master-0 kubenswrapper[7864]: I0224 02:17:15.849275 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"
Feb 24 02:17:15.850228 master-0 kubenswrapper[7864]: I0224 02:17:15.850170 7864 scope.go:117] "RemoveContainer" containerID="8f261203a7d383fb41e07035f6494d12c5ece1ea073bc54bdb848ad1f13db388"
Feb 24 02:17:15.850641 master-0 kubenswrapper[7864]: E0224 02:17:15.850550 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-9cc7d7bb-hvr8b_openshift-operator-controller(4a2d8ef6-14ac-490d-a931-7082344d3f46)\"" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" podUID="4a2d8ef6-14ac-490d-a931-7082344d3f46"
Feb 24 02:17:16.084809 master-0 kubenswrapper[7864]: I0224 02:17:16.084715 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:16.084809 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:16.084809 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:16.084809 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:16.084809 master-0 kubenswrapper[7864]: I0224 02:17:16.084796 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:16.579901 master-0 kubenswrapper[7864]: E0224 02:17:16.579763 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Feb 24 02:17:17.085725 master-0 kubenswrapper[7864]: I0224 02:17:17.085612 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:17.085725 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:17.085725 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:17.085725 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:17.085725 master-0 kubenswrapper[7864]: I0224 02:17:17.085718 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:17.875195 master-0 kubenswrapper[7864]: I0224 02:17:17.875109 7864 scope.go:117] "RemoveContainer" containerID="049ca3ed1273887ef2dfb1c46cae5f8f4c14254dc46c7407bca3af34b7c3bdfe"
Feb 24 02:17:18.085044 master-0 kubenswrapper[7864]: I0224 02:17:18.084979 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:18.085044 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:18.085044 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:18.085044 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:18.085355 master-0 kubenswrapper[7864]: I0224 02:17:18.085090 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:19.074229 master-0 kubenswrapper[7864]: I0224 02:17:19.074133 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/3.log"
Feb 24 02:17:19.075179 master-0 kubenswrapper[7864]: I0224 02:17:19.074937 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8"}
Feb 24 02:17:19.085039 master-0 kubenswrapper[7864]: I0224 02:17:19.084970 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:19.085039 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:19.085039 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:19.085039 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:19.085363 master-0 kubenswrapper[7864]: I0224 02:17:19.085080 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:20.085157 master-0 kubenswrapper[7864]: I0224 02:17:20.085044 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:20.085157 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:20.085157 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:20.085157 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:20.085157 master-0 kubenswrapper[7864]: I0224 02:17:20.085140 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:20.218750 master-0 kubenswrapper[7864]: I0224 02:17:20.218637 7864 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:17:20.218750 master-0 kubenswrapper[7864]: I0224 02:17:20.218765 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:17:20.219956 master-0 kubenswrapper[7864]: I0224 02:17:20.219902 7864 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 24 02:17:20.220068 master-0 kubenswrapper[7864]: I0224 02:17:20.220025 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" gracePeriod=30
Feb 24 02:17:20.344110 master-0 kubenswrapper[7864]: E0224 02:17:20.344035 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 24 02:17:20.875427 master-0 kubenswrapper[7864]: I0224 02:17:20.875312 7864 scope.go:117] "RemoveContainer" containerID="361cb30b3b5fa4aef37729ebe34ed6dc589b5812da2325814ef9feeb894335c5"
Feb 24 02:17:20.876014 master-0 kubenswrapper[7864]: E0224 02:17:20.875717 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5"
Feb 24 02:17:21.084698 master-0 kubenswrapper[7864]: I0224 02:17:21.084538 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:21.084698 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:21.084698 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:21.084698 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:21.084698 master-0 kubenswrapper[7864]: I0224 02:17:21.084674 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:21.096129 master-0 kubenswrapper[7864]: I0224 02:17:21.096012 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" exitCode=2
Feb 24 02:17:21.096129 master-0 kubenswrapper[7864]: I0224 02:17:21.096077 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"}
Feb 24 02:17:21.096129 master-0 kubenswrapper[7864]: I0224 02:17:21.096131 7864 scope.go:117] "RemoveContainer" containerID="5b0e89fd54937406992276c2b46728950e8622b38ab169cee211632dcaf018e1"
Feb 24 02:17:21.097215 master-0 kubenswrapper[7864]: I0224 02:17:21.097134 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"
Feb 24 02:17:21.097634 master-0 kubenswrapper[7864]: E0224 02:17:21.097556 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 24 02:17:22.085135 master-0 kubenswrapper[7864]: I0224 02:17:22.085038 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:22.085135 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:22.085135 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:22.085135 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:22.086235 master-0 kubenswrapper[7864]: I0224 02:17:22.085142 7864 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:22.420429 master-0 kubenswrapper[7864]: I0224 02:17:22.420177 7864 status_manager.go:851] "Failed to get status for pod" podUID="adc1097b-c1ab-4f09-965d-1c819671475b" pod="openshift-network-node-identity/network-node-identity-p5b6q" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-p5b6q)"
Feb 24 02:17:23.085282 master-0 kubenswrapper[7864]: I0224 02:17:23.085186 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:23.085282 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:23.085282 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:23.085282 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:23.085282 master-0 kubenswrapper[7864]: I0224 02:17:23.085266 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:23.875058 master-0 kubenswrapper[7864]: I0224 02:17:23.874946 7864 scope.go:117] "RemoveContainer" containerID="2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a"
Feb 24 02:17:24.084643 master-0 kubenswrapper[7864]: I0224 02:17:24.084555 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:24.084643 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:24.084643 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:24.084643 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:24.085047 master-0 kubenswrapper[7864]: I0224 02:17:24.084676 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:24.132312 master-0 kubenswrapper[7864]: I0224 02:17:24.132090 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerStarted","Data":"36bf2499ceb16a6789bfaea260bc661782023dffc5c354b07ad277186683d4ac"}
Feb 24 02:17:24.133769 master-0 kubenswrapper[7864]: I0224 02:17:24.133701 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:17:24.135631 master-0 kubenswrapper[7864]: I0224 02:17:24.135551 7864 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body=
Feb 24 02:17:24.135962 master-0 kubenswrapper[7864]: I0224 02:17:24.135886 7864 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused"
Feb 24 02:17:24.484295 master-0 kubenswrapper[7864]: I0224 02:17:24.484224 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 02:17:24.485979 master-0 kubenswrapper[7864]: I0224 02:17:24.485953 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"
Feb 24 02:17:24.486862 master-0 kubenswrapper[7864]: E0224 02:17:24.486807 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 24 02:17:25.085168 master-0 kubenswrapper[7864]: I0224 02:17:25.085100 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:25.085168 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:25.085168 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:25.085168 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:25.085791 master-0 kubenswrapper[7864]: I0224 02:17:25.085193 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:25.145102 master-0 kubenswrapper[7864]: I0224 02:17:25.144999 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:17:25.674204 master-0 kubenswrapper[7864]: I0224 02:17:25.674111 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz"
Feb 24 02:17:25.675051 master-0 kubenswrapper[7864]: I0224 02:17:25.675005 7864 scope.go:117] "RemoveContainer" containerID="8ea7a40a6269170c26a7054768f3ac9bed9d2f95f70afef7c8db1dfa7590c2a4"
Feb 24 02:17:25.849848 master-0 kubenswrapper[7864]: I0224 02:17:25.849752 7864 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"
Feb 24 02:17:25.850891 master-0 kubenswrapper[7864]: I0224 02:17:25.850819 7864 scope.go:117] "RemoveContainer" containerID="8f261203a7d383fb41e07035f6494d12c5ece1ea073bc54bdb848ad1f13db388"
Feb 24 02:17:26.086020 master-0 kubenswrapper[7864]: I0224 02:17:26.085949 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:26.086020 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:26.086020 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:26.086020 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:26.086505 master-0 kubenswrapper[7864]: I0224 02:17:26.086074 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:26.167242 master-0 kubenswrapper[7864]: I0224 02:17:26.167160 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhklz_4f5b3b93-a59d-495c-a311-8913fa6000fc/manager/1.log"
Feb 24 02:17:26.169197 master-0 kubenswrapper[7864]: I0224 02:17:26.169132 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerStarted","Data":"448360d167a3924b4e80b020c352dc3f31f6a37b9004d07ffe025473c90dfad5"}
Feb 24 02:17:26.170063 master-0 kubenswrapper[7864]: I0224 02:17:26.170010 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz"
Feb 24 02:17:27.085545 master-0 kubenswrapper[7864]: I0224 02:17:27.085424 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:27.085545 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:27.085545 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:27.085545 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:27.085545 master-0 kubenswrapper[7864]: I0224 02:17:27.085566 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:27.182881 master-0 kubenswrapper[7864]: I0224 02:17:27.182735 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-hvr8b_4a2d8ef6-14ac-490d-a931-7082344d3f46/manager/1.log"
Feb 24 02:17:27.184535 master-0 kubenswrapper[7864]: I0224 02:17:27.184453 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"
event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerStarted","Data":"9646b934b2718a877d84d3844b7ac6e3d8136d87f64b2e2fac02d09f99a5f0af"}
Feb 24 02:17:28.084936 master-0 kubenswrapper[7864]: I0224 02:17:28.084792 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:28.084936 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:28.084936 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:28.084936 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:28.084936 master-0 kubenswrapper[7864]: I0224 02:17:28.084902 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:29.085144 master-0 kubenswrapper[7864]: I0224 02:17:29.085030 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:29.085144 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:29.085144 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:29.085144 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:29.086313 master-0 kubenswrapper[7864]: I0224 02:17:29.085152 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:30.085374 master-0 kubenswrapper[7864]: I0224 02:17:30.085257 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:30.085374 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:30.085374 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:30.085374 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:30.085374 master-0 kubenswrapper[7864]: I0224 02:17:30.085367 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:31.085146 master-0 kubenswrapper[7864]: I0224 02:17:31.085048 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:31.085146 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:31.085146 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:31.085146 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:31.086268 master-0 kubenswrapper[7864]: I0224 02:17:31.085159 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:32.084789 master-0 kubenswrapper[7864]: I0224 02:17:32.084674 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:32.084789 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:32.084789 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:32.084789 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:32.085542 master-0 kubenswrapper[7864]: I0224 02:17:32.084794 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:32.328190 master-0 kubenswrapper[7864]: E0224 02:17:32.328082 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:17:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:17:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:17:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:17:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:17:32.562399 master-0 kubenswrapper[7864]: E0224 02:17:32.562177 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970d0d76c45509 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:15:29.871893769 +0000 UTC m=+694.199547421,LastTimestamp:2026-02-24 02:15:30.88634129 +0000 UTC m=+695.213994942,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:17:32.982026 master-0 kubenswrapper[7864]: E0224 02:17:32.981921 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 24 02:17:33.084671 master-0 kubenswrapper[7864]: I0224 02:17:33.084497 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:33.084671 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:33.084671 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:33.084671 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:33.084671 master-0 kubenswrapper[7864]: I0224 02:17:33.084644 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:34.085708 master-0 kubenswrapper[7864]: I0224 02:17:34.085616 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:34.085708 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:34.085708 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:34.085708 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:34.086847 master-0 kubenswrapper[7864]: I0224 02:17:34.085742 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:34.875252 master-0 kubenswrapper[7864]: I0224 02:17:34.875102 7864 scope.go:117] "RemoveContainer" containerID="361cb30b3b5fa4aef37729ebe34ed6dc589b5812da2325814ef9feeb894335c5"
Feb 24 02:17:34.876375 master-0 kubenswrapper[7864]: E0224 02:17:34.876294 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5"
Feb 24 02:17:35.085265 master-0 kubenswrapper[7864]: I0224 02:17:35.085180 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:35.085265 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:35.085265 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:35.085265 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:35.085791 master-0 kubenswrapper[7864]: I0224 02:17:35.085276 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:35.677629 master-0 kubenswrapper[7864]: I0224 02:17:35.677480 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz"
Feb 24 02:17:35.849612 master-0 kubenswrapper[7864]: I0224 02:17:35.849478 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"
Feb 24 02:17:35.852608 master-0 kubenswrapper[7864]: I0224 02:17:35.852513 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"
Feb 24 02:17:36.085031 master-0 kubenswrapper[7864]: I0224 02:17:36.084903 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:36.085031 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:36.085031 master-0 kubenswrapper[7864]:
[+]process-running ok Feb 24 02:17:36.085031 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:36.085031 master-0 kubenswrapper[7864]: I0224 02:17:36.084996 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:36.830095 master-0 kubenswrapper[7864]: E0224 02:17:36.829978 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 24 02:17:37.084465 master-0 kubenswrapper[7864]: I0224 02:17:37.084331 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:37.084465 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:37.084465 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:37.084465 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:37.084465 master-0 kubenswrapper[7864]: I0224 02:17:37.084423 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:37.280706 master-0 kubenswrapper[7864]: I0224 02:17:37.280606 7864 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="bb9a80ed6d7d7eb83242571f651240d13b6fe2b3ccfaff6770e496961a1600a5" exitCode=0 Feb 24 02:17:37.280706 master-0 kubenswrapper[7864]: I0224 02:17:37.280659 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"bb9a80ed6d7d7eb83242571f651240d13b6fe2b3ccfaff6770e496961a1600a5"} Feb 24 02:17:37.281345 master-0 kubenswrapper[7864]: I0224 02:17:37.281286 7864 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219" Feb 24 02:17:37.281345 master-0 kubenswrapper[7864]: I0224 02:17:37.281331 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219" Feb 24 02:17:37.876189 master-0 kubenswrapper[7864]: I0224 02:17:37.876048 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:17:37.877245 master-0 kubenswrapper[7864]: E0224 02:17:37.876607 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:17:38.085402 master-0 kubenswrapper[7864]: I0224 02:17:38.085256 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:38.085402 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:38.085402 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:38.085402 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:38.085842 master-0 kubenswrapper[7864]: I0224 02:17:38.085390 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:39.085740 master-0 kubenswrapper[7864]: I0224 02:17:39.085604 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:39.085740 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:39.085740 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:39.085740 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:39.085740 master-0 kubenswrapper[7864]: I0224 02:17:39.085699 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:40.086173 master-0 kubenswrapper[7864]: I0224 02:17:40.086069 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:40.086173 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:40.086173 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:40.086173 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:40.087274 master-0 kubenswrapper[7864]: I0224 02:17:40.086195 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:41.086095 
master-0 kubenswrapper[7864]: I0224 02:17:41.085984 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:41.086095 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:41.086095 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:41.086095 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:41.087340 master-0 kubenswrapper[7864]: I0224 02:17:41.086111 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:42.084683 master-0 kubenswrapper[7864]: I0224 02:17:42.084601 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:42.084683 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:42.084683 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:42.084683 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:42.085191 master-0 kubenswrapper[7864]: I0224 02:17:42.084719 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:42.330127 master-0 kubenswrapper[7864]: E0224 02:17:42.329698 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:17:43.085345 master-0 kubenswrapper[7864]: I0224 02:17:43.085258 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:43.085345 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:43.085345 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:43.085345 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:43.085922 master-0 kubenswrapper[7864]: I0224 02:17:43.085373 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:44.085794 master-0 kubenswrapper[7864]: I0224 02:17:44.085682 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:44.085794 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:44.085794 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:44.085794 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:44.086861 master-0 kubenswrapper[7864]: I0224 02:17:44.085794 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:45.085810 master-0 kubenswrapper[7864]: I0224 
02:17:45.085701 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:45.085810 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:45.085810 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:45.085810 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:45.086971 master-0 kubenswrapper[7864]: I0224 02:17:45.085833 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:45.366223 master-0 kubenswrapper[7864]: I0224 02:17:45.366023 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-sjqsx_8ebd1a97-ff7b-4a10-a1b5-956e427478a8/machine-approver-controller/0.log" Feb 24 02:17:45.367129 master-0 kubenswrapper[7864]: I0224 02:17:45.367031 7864 generic.go:334] "Generic (PLEG): container finished" podID="8ebd1a97-ff7b-4a10-a1b5-956e427478a8" containerID="1ad25f42402e4374c8b94191386be9d7bc2003ced71ea11e24c2117025405399" exitCode=255 Feb 24 02:17:45.367229 master-0 kubenswrapper[7864]: I0224 02:17:45.367153 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" event={"ID":"8ebd1a97-ff7b-4a10-a1b5-956e427478a8","Type":"ContainerDied","Data":"1ad25f42402e4374c8b94191386be9d7bc2003ced71ea11e24c2117025405399"} Feb 24 02:17:45.367951 master-0 kubenswrapper[7864]: I0224 02:17:45.367903 7864 scope.go:117] "RemoveContainer" containerID="1ad25f42402e4374c8b94191386be9d7bc2003ced71ea11e24c2117025405399" Feb 24 02:17:46.085733 master-0 kubenswrapper[7864]: I0224 
02:17:46.085646 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:46.085733 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:46.085733 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:46.085733 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:46.086839 master-0 kubenswrapper[7864]: I0224 02:17:46.085758 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:46.385479 master-0 kubenswrapper[7864]: I0224 02:17:46.385261 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-sjqsx_8ebd1a97-ff7b-4a10-a1b5-956e427478a8/machine-approver-controller/0.log" Feb 24 02:17:46.386287 master-0 kubenswrapper[7864]: I0224 02:17:46.386195 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" event={"ID":"8ebd1a97-ff7b-4a10-a1b5-956e427478a8","Type":"ContainerStarted","Data":"3595b27c901a436cc869a6f11c6c419c015d879fa7a8bd4cad8a61ebd21bfc83"} Feb 24 02:17:47.085439 master-0 kubenswrapper[7864]: I0224 02:17:47.085318 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:47.085439 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:47.085439 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:47.085439 master-0 
kubenswrapper[7864]: healthz check failed Feb 24 02:17:47.085968 master-0 kubenswrapper[7864]: I0224 02:17:47.085444 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:48.085521 master-0 kubenswrapper[7864]: I0224 02:17:48.085405 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:48.085521 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:48.085521 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:48.085521 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:48.085521 master-0 kubenswrapper[7864]: I0224 02:17:48.085524 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:49.084229 master-0 kubenswrapper[7864]: I0224 02:17:49.084090 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:49.084229 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:49.084229 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:49.084229 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:49.084229 master-0 kubenswrapper[7864]: I0224 02:17:49.084195 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:49.875496 master-0 kubenswrapper[7864]: I0224 02:17:49.875375 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:17:49.875496 master-0 kubenswrapper[7864]: I0224 02:17:49.875467 7864 scope.go:117] "RemoveContainer" containerID="361cb30b3b5fa4aef37729ebe34ed6dc589b5812da2325814ef9feeb894335c5" Feb 24 02:17:49.876436 master-0 kubenswrapper[7864]: E0224 02:17:49.875939 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:17:49.983948 master-0 kubenswrapper[7864]: E0224 02:17:49.983807 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 02:17:50.085239 master-0 kubenswrapper[7864]: I0224 02:17:50.085158 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:50.085239 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:50.085239 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:50.085239 master-0 kubenswrapper[7864]: healthz check failed 
Feb 24 02:17:50.085531 master-0 kubenswrapper[7864]: I0224 02:17:50.085267 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:50.425128 master-0 kubenswrapper[7864]: I0224 02:17:50.424997 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/3.log" Feb 24 02:17:50.425443 master-0 kubenswrapper[7864]: I0224 02:17:50.425129 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerStarted","Data":"de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d"} Feb 24 02:17:51.085406 master-0 kubenswrapper[7864]: I0224 02:17:51.085075 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:51.085406 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:51.085406 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:51.085406 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:51.085406 master-0 kubenswrapper[7864]: I0224 02:17:51.085190 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:52.084592 master-0 kubenswrapper[7864]: I0224 02:17:52.084471 7864 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:52.084592 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:52.084592 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:52.084592 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:52.085049 master-0 kubenswrapper[7864]: I0224 02:17:52.084620 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:52.330309 master-0 kubenswrapper[7864]: E0224 02:17:52.330212 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:17:52.449464 master-0 kubenswrapper[7864]: I0224 02:17:52.449324 7864 generic.go:334] "Generic (PLEG): container finished" podID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerID="5ed088abb8fdf119602dca1779c3b84da28af95aaab8dcf8c7df738c7d83aa56" exitCode=0 Feb 24 02:17:52.449464 master-0 kubenswrapper[7864]: I0224 02:17:52.449423 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" event={"ID":"e8d6a6c0-b944-4206-9178-9a9930b303b9","Type":"ContainerDied","Data":"5ed088abb8fdf119602dca1779c3b84da28af95aaab8dcf8c7df738c7d83aa56"} Feb 24 02:17:52.450642 master-0 kubenswrapper[7864]: I0224 02:17:52.450597 7864 scope.go:117] "RemoveContainer" containerID="5ed088abb8fdf119602dca1779c3b84da28af95aaab8dcf8c7df738c7d83aa56" Feb 24 02:17:53.085215 master-0 kubenswrapper[7864]: I0224 
02:17:53.085107 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:53.085215 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:17:53.085215 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:53.085215 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:53.085683 master-0 kubenswrapper[7864]: I0224 02:17:53.085243 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:53.463820 master-0 kubenswrapper[7864]: I0224 02:17:53.463693 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" event={"ID":"e8d6a6c0-b944-4206-9178-9a9930b303b9","Type":"ContainerStarted","Data":"00fb88dd6ea7a9ddb5d7ddf189575d130d7630f12716d8a17f87c83ef377ea62"} Feb 24 02:17:53.464480 master-0 kubenswrapper[7864]: I0224 02:17:53.464134 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:17:53.470390 master-0 kubenswrapper[7864]: I0224 02:17:53.470322 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:17:54.084767 master-0 kubenswrapper[7864]: I0224 02:17:54.084693 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:17:54.084767 master-0 kubenswrapper[7864]: 
[-]has-synced failed: reason withheld Feb 24 02:17:54.084767 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:17:54.084767 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:17:54.085203 master-0 kubenswrapper[7864]: I0224 02:17:54.084811 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:17:54.479184 master-0 kubenswrapper[7864]: I0224 02:17:54.479107 7864 generic.go:334] "Generic (PLEG): container finished" podID="523033b8-4101-4a55-8320-55bef04ddaaf" containerID="6dce9a18bd8067d5b09584fe75915e8862f74f2dfbc221872c96fc50a87d78c5" exitCode=0 Feb 24 02:17:54.480154 master-0 kubenswrapper[7864]: I0224 02:17:54.480089 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerDied","Data":"6dce9a18bd8067d5b09584fe75915e8862f74f2dfbc221872c96fc50a87d78c5"} Feb 24 02:17:54.480363 master-0 kubenswrapper[7864]: I0224 02:17:54.480337 7864 scope.go:117] "RemoveContainer" containerID="1d78e51e0a1da7f353fa2fc0c8e9c9a46d124e7c769ba9917e9138703d244089" Feb 24 02:17:54.481389 master-0 kubenswrapper[7864]: I0224 02:17:54.481344 7864 scope.go:117] "RemoveContainer" containerID="6dce9a18bd8067d5b09584fe75915e8862f74f2dfbc221872c96fc50a87d78c5" Feb 24 02:17:54.481794 master-0 kubenswrapper[7864]: E0224 02:17:54.481740 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-5d8dfcdc87-bb22k_openshift-ovn-kubernetes(523033b8-4101-4a55-8320-55bef04ddaaf)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" 
podUID="523033b8-4101-4a55-8320-55bef04ddaaf"
Feb 24 02:17:54.482270 master-0 kubenswrapper[7864]: I0224 02:17:54.482206 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-ckntz_a4cea44a-1c6e-465f-97df-2c951056cb85/control-plane-machine-set-operator/0.log"
Feb 24 02:17:54.482380 master-0 kubenswrapper[7864]: I0224 02:17:54.482304 7864 generic.go:334] "Generic (PLEG): container finished" podID="a4cea44a-1c6e-465f-97df-2c951056cb85" containerID="333b5a9659128a52ffaed5da3c25d8feb0986d4e855c20f96e40ad31f9cb9171" exitCode=1
Feb 24 02:17:54.483537 master-0 kubenswrapper[7864]: I0224 02:17:54.482374 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" event={"ID":"a4cea44a-1c6e-465f-97df-2c951056cb85","Type":"ContainerDied","Data":"333b5a9659128a52ffaed5da3c25d8feb0986d4e855c20f96e40ad31f9cb9171"}
Feb 24 02:17:54.483537 master-0 kubenswrapper[7864]: I0224 02:17:54.483160 7864 scope.go:117] "RemoveContainer" containerID="333b5a9659128a52ffaed5da3c25d8feb0986d4e855c20f96e40ad31f9cb9171"
Feb 24 02:17:55.085659 master-0 kubenswrapper[7864]: I0224 02:17:55.085553 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:55.085659 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:55.085659 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:55.085659 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:55.086253 master-0 kubenswrapper[7864]: I0224 02:17:55.085681 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:55.499165 master-0 kubenswrapper[7864]: I0224 02:17:55.499064 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-ckntz_a4cea44a-1c6e-465f-97df-2c951056cb85/control-plane-machine-set-operator/0.log"
Feb 24 02:17:55.500138 master-0 kubenswrapper[7864]: I0224 02:17:55.499231 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" event={"ID":"a4cea44a-1c6e-465f-97df-2c951056cb85","Type":"ContainerStarted","Data":"fbabd347815449a60e4b2b5993aa92710830e3b0983b74f7142b339cad432918"}
Feb 24 02:17:55.503067 master-0 kubenswrapper[7864]: I0224 02:17:55.502991 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/1.log"
Feb 24 02:17:55.504499 master-0 kubenswrapper[7864]: I0224 02:17:55.504414 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/0.log"
Feb 24 02:17:55.504717 master-0 kubenswrapper[7864]: I0224 02:17:55.504527 7864 generic.go:334] "Generic (PLEG): container finished" podID="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" containerID="4302401dd0c403009a73a84bbcac7a1752f583128d1d71005e6d83a2aeed0734" exitCode=1
Feb 24 02:17:55.504717 master-0 kubenswrapper[7864]: I0224 02:17:55.504667 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerDied","Data":"4302401dd0c403009a73a84bbcac7a1752f583128d1d71005e6d83a2aeed0734"}
Feb 24 02:17:55.504912 master-0 kubenswrapper[7864]: I0224 02:17:55.504790 7864 scope.go:117] "RemoveContainer" containerID="7144c5e947ad686471e67b52048230854640c3d324dfe4c40330e542a4803eda"
Feb 24 02:17:55.505410 master-0 kubenswrapper[7864]: I0224 02:17:55.505355 7864 scope.go:117] "RemoveContainer" containerID="4302401dd0c403009a73a84bbcac7a1752f583128d1d71005e6d83a2aeed0734"
Feb 24 02:17:55.505837 master-0 kubenswrapper[7864]: E0224 02:17:55.505781 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-k98fq_openshift-machine-api(7b4e3ba0-5194-4e20-8f12-dea4b67504fe)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" podUID="7b4e3ba0-5194-4e20-8f12-dea4b67504fe"
Feb 24 02:17:56.085122 master-0 kubenswrapper[7864]: I0224 02:17:56.085027 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:56.085122 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:56.085122 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:56.085122 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:56.085654 master-0 kubenswrapper[7864]: I0224 02:17:56.085129 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:56.517213 master-0 kubenswrapper[7864]: I0224 02:17:56.517120 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/1.log"
Feb 24 02:17:57.085427 master-0 kubenswrapper[7864]: I0224 02:17:57.085330 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:57.085427 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:57.085427 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:57.085427 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:57.086012 master-0 kubenswrapper[7864]: I0224 02:17:57.085451 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:58.084887 master-0 kubenswrapper[7864]: I0224 02:17:58.084792 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:58.084887 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:58.084887 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:58.084887 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:58.086004 master-0 kubenswrapper[7864]: I0224 02:17:58.084894 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:17:59.085282 master-0 kubenswrapper[7864]: I0224 02:17:59.085191 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:17:59.085282 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:17:59.085282 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:17:59.085282 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:17:59.086427 master-0 kubenswrapper[7864]: I0224 02:17:59.085303 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:00.085396 master-0 kubenswrapper[7864]: I0224 02:18:00.085266 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:00.085396 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:00.085396 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:00.085396 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:00.087013 master-0 kubenswrapper[7864]: I0224 02:18:00.085400 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:01.085810 master-0 kubenswrapper[7864]: I0224 02:18:01.085681 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:01.085810 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:01.085810 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:01.085810 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:01.086886 master-0 kubenswrapper[7864]: I0224 02:18:01.085820 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:02.084650 master-0 kubenswrapper[7864]: I0224 02:18:02.084089 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:02.084650 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:02.084650 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:02.084650 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:02.084650 master-0 kubenswrapper[7864]: I0224 02:18:02.084240 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:02.331151 master-0 kubenswrapper[7864]: E0224 02:18:02.331052 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:18:03.086111 master-0 kubenswrapper[7864]: I0224 02:18:03.086004 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:03.086111 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:03.086111 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:03.086111 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:03.086816 master-0 kubenswrapper[7864]: I0224 02:18:03.086142 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:03.875337 master-0 kubenswrapper[7864]: I0224 02:18:03.875241 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"
Feb 24 02:18:03.876271 master-0 kubenswrapper[7864]: E0224 02:18:03.875749 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 24 02:18:04.084955 master-0 kubenswrapper[7864]: I0224 02:18:04.084871 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:04.084955 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:04.084955 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:04.084955 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:04.084955 master-0 kubenswrapper[7864]: I0224 02:18:04.084991 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:05.085104 master-0 kubenswrapper[7864]: I0224 02:18:05.084999 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:05.085104 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:05.085104 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:05.085104 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:05.086210 master-0 kubenswrapper[7864]: I0224 02:18:05.085112 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:05.875289 master-0 kubenswrapper[7864]: I0224 02:18:05.875154 7864 scope.go:117] "RemoveContainer" containerID="6dce9a18bd8067d5b09584fe75915e8862f74f2dfbc221872c96fc50a87d78c5"
Feb 24 02:18:06.084995 master-0 kubenswrapper[7864]: I0224 02:18:06.084922 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:06.084995 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:06.084995 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:06.084995 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:06.085964 master-0 kubenswrapper[7864]: I0224 02:18:06.085008 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:06.565163 master-0 kubenswrapper[7864]: E0224 02:18:06.564944 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.18970c7d89167c61 kube-system 8913 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:05:11 +0000 UTC,LastTimestamp:2026-02-24 02:15:32.915625702 +0000 UTC m=+697.243279354,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:18:06.622145 master-0 kubenswrapper[7864]: I0224 02:18:06.622082 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerStarted","Data":"08da08bd752d65477ab01471c9630dda4850b6474f22c31e418eb4d79c852e14"}
Feb 24 02:18:06.985111 master-0 kubenswrapper[7864]: E0224 02:18:06.985010 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 24 02:18:07.085164 master-0 kubenswrapper[7864]: I0224 02:18:07.085025 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:07.085164 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:07.085164 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:07.085164 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:07.085164 master-0 kubenswrapper[7864]: I0224 02:18:07.085128 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:08.085557 master-0 kubenswrapper[7864]: I0224 02:18:08.085416 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:08.085557 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:08.085557 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:08.085557 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:08.085557 master-0 kubenswrapper[7864]: I0224 02:18:08.085520 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:09.084814 master-0 kubenswrapper[7864]: I0224 02:18:09.084721 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:09.084814 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:09.084814 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:09.084814 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:09.085346 master-0 kubenswrapper[7864]: I0224 02:18:09.084827 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:09.875477 master-0 kubenswrapper[7864]: I0224 02:18:09.875384 7864 scope.go:117] "RemoveContainer" containerID="4302401dd0c403009a73a84bbcac7a1752f583128d1d71005e6d83a2aeed0734"
Feb 24 02:18:10.085896 master-0 kubenswrapper[7864]: I0224 02:18:10.085805 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:10.085896 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:10.085896 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:10.085896 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:10.086277 master-0 kubenswrapper[7864]: I0224 02:18:10.085914 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:10.661937 master-0 kubenswrapper[7864]: I0224 02:18:10.661851 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/1.log"
Feb 24 02:18:10.663557 master-0 kubenswrapper[7864]: I0224 02:18:10.662553 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerStarted","Data":"08908d93b30c563001e2ff25650203b4026284e8bf58ed4cc26c75825e885fed"}
Feb 24 02:18:11.086072 master-0 kubenswrapper[7864]: I0224 02:18:11.085985 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:18:11.086072 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:18:11.086072 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:18:11.086072 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:18:11.087159 master-0 kubenswrapper[7864]: I0224 02:18:11.086091 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:18:11.087159 master-0 kubenswrapper[7864]: I0224 02:18:11.086176 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl"
Feb 24 02:18:11.087159 master-0 kubenswrapper[7864]: I0224 02:18:11.086960 7864 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"16b88bdb19342563d81116f8e13c7e868beadde1813cafcd204be1678600a199"} pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" containerMessage="Container router failed startup probe, will be restarted"
Feb 24 02:18:11.087159 master-0 kubenswrapper[7864]: I0224 02:18:11.087029 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" containerID="cri-o://16b88bdb19342563d81116f8e13c7e868beadde1813cafcd204be1678600a199" gracePeriod=3600
Feb 24 02:18:11.285713 master-0 kubenswrapper[7864]: E0224 02:18:11.285563 7864 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 24 02:18:11.676722 master-0 kubenswrapper[7864]: I0224 02:18:11.676659 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"0c6beafb153b866ff3e8e8fd1b01b6ddbde73e4585489b844b97c2df21c90765"}
Feb 24 02:18:12.331953 master-0 kubenswrapper[7864]: E0224 02:18:12.331880 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:18:12.331953 master-0 kubenswrapper[7864]: E0224 02:18:12.331941 7864 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 24 02:18:12.694715 master-0 kubenswrapper[7864]: I0224 02:18:12.694640 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"dd9e8ab8ba4d28f1a7531541449124d4c22497cafc8d64913441d5478c0d7774"}
Feb 24 02:18:12.694882 master-0 kubenswrapper[7864]: I0224 02:18:12.694723 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"39edb4f2fd57b036ce39c1f95caba6b35ee9046cbc47aee42bae09ac48747aa5"}
Feb 24 02:18:13.716913 master-0 kubenswrapper[7864]: I0224 02:18:13.716805 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"38979f2032cf905704224ecf95ea405c1d3628e64550fb0512de42cd82d16c3b"}
Feb 24 02:18:13.716913 master-0 kubenswrapper[7864]: I0224 02:18:13.716898 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"4c5a831827aa8c6629b7edf7fcbb96edd2fce87cb622d23171cd4cc0b00518a0"}
Feb 24 02:18:13.717945 master-0 kubenswrapper[7864]: I0224 02:18:13.717347 7864 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219"
Feb 24 02:18:13.717945 master-0 kubenswrapper[7864]: I0224 02:18:13.717401 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="fc3a7b55-0847-44db-87c3-4a0d6e333219"
Feb 24 02:18:16.919304 master-0 kubenswrapper[7864]: I0224 02:18:16.919096 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 24 02:18:16.919304 master-0 kubenswrapper[7864]: I0224 02:18:16.919166 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 24 02:18:17.875280 master-0 kubenswrapper[7864]: I0224 02:18:17.875147 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"
Feb 24 02:18:17.875697 master-0 kubenswrapper[7864]: E0224 02:18:17.875619 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 24 02:18:20.790196 master-0 kubenswrapper[7864]: I0224 02:18:20.790118 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/4.log"
Feb 24 02:18:20.791174 master-0 kubenswrapper[7864]: I0224 02:18:20.791021 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/3.log"
Feb 24 02:18:20.791174 master-0 kubenswrapper[7864]: I0224 02:18:20.791110 7864 generic.go:334] "Generic (PLEG): container finished" podID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d" exitCode=1
Feb 24 02:18:20.791174 master-0 kubenswrapper[7864]: I0224 02:18:20.791166 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerDied","Data":"de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d"}
Feb 24 02:18:20.791387 master-0 kubenswrapper[7864]: I0224 02:18:20.791235 7864 scope.go:117] "RemoveContainer" containerID="361cb30b3b5fa4aef37729ebe34ed6dc589b5812da2325814ef9feeb894335c5"
Feb 24 02:18:20.792348 master-0 kubenswrapper[7864]: I0224 02:18:20.792277 7864 scope.go:117] "RemoveContainer" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d"
Feb 24 02:18:20.792824 master-0 kubenswrapper[7864]: E0224 02:18:20.792759 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5"
Feb 24 02:18:21.837320 master-0 kubenswrapper[7864]: I0224 02:18:21.837264 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/4.log"
Feb 24 02:18:22.422824 master-0 kubenswrapper[7864]: I0224 02:18:22.422717 7864 status_manager.go:851] "Failed to get status for pod" podUID="56c3cb71c9851003c8de7e7c5db4b87e" pod="kube-system/bootstrap-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-scheduler-master-0)"
Feb 24 02:18:23.052272 master-0 kubenswrapper[7864]: I0224 02:18:23.052206 7864 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Feb 24 02:18:23.062360 master-0 kubenswrapper[7864]: I0224 02:18:23.062309 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 24 02:18:23.093620 master-0 kubenswrapper[7864]: I0224 02:18:23.089504 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 24 02:18:23.158347 master-0 kubenswrapper[7864]: I0224 02:18:23.158269 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"]
Feb 24 02:18:23.160550 master-0 kubenswrapper[7864]: I0224 02:18:23.160512 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-dg77f"]
Feb 24 02:18:23.891453 master-0 kubenswrapper[7864]: I0224 02:18:23.891315 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" path="/var/lib/kubelet/pods/dc3d08db-45fa-4fef-b1fd-2875f22d5c45/volumes"
Feb 24 02:18:23.985750 master-0 kubenswrapper[7864]: E0224 02:18:23.985683 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Feb 24 02:18:26.958480 master-0 kubenswrapper[7864]: I0224 02:18:26.958404 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 24 02:18:31.949012 master-0 kubenswrapper[7864]: I0224 02:18:31.948923 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 24 02:18:32.598068 master-0 kubenswrapper[7864]: I0224 02:18:32.597928 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Feb 24 02:18:32.874731 master-0 kubenswrapper[7864]: I0224 02:18:32.874539 7864 scope.go:117] "RemoveContainer" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d"
Feb 24 02:18:32.875053 master-0 kubenswrapper[7864]: E0224 02:18:32.875022 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5"
Feb 24 02:18:32.875427 master-0 kubenswrapper[7864]: I0224 02:18:32.875356 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"
Feb 24 02:18:32.876020 master-0 kubenswrapper[7864]: E0224 02:18:32.875858 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 24 02:18:32.944478 master-0 kubenswrapper[7864]: I0224 02:18:32.944225 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.944171312 podStartE2EDuration="944.171312ms" podCreationTimestamp="2026-02-24 02:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:18:32.942386032 +0000 UTC m=+877.270039694" watchObservedRunningTime="2026-02-24 02:18:32.944171312 +0000 UTC m=+877.271824964"
Feb 24 02:18:32.981600 master-0 kubenswrapper[7864]: E0224 02:18:32.981492 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Feb 24 02:18:40.568953 master-0 kubenswrapper[7864]: E0224 02:18:40.568707 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.18970c7da714a206 kube-system 8916 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:05:12 +0000 UTC,LastTimestamp:2026-02-24 02:15:33.288189789 +0000 UTC m=+697.615843451,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 02:18:40.987014 master-0 kubenswrapper[7864]: E0224 02:18:40.986875 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 24 02:18:42.735516 master-0 kubenswrapper[7864]: E0224 02:18:42.735411 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:18:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:18:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:18:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T02:18:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:18:45.881961 master-0 kubenswrapper[7864]: I0224 02:18:45.881835 7864 scope.go:117] "RemoveContainer" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d"
Feb 24 02:18:45.883041 master-0 kubenswrapper[7864]: E0224 02:18:45.882249 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5"
Feb 24 02:18:46.874755 master-0 kubenswrapper[7864]: I0224 02:18:46.874537 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"
Feb 24 02:18:46.875654 master-0 kubenswrapper[7864]: E0224 02:18:46.875613 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 24 02:18:52.736794 master-0 kubenswrapper[7864]: E0224 02:18:52.736436 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:18:58.207055 master-0 kubenswrapper[7864]: I0224 02:18:58.206965 7864 generic.go:334] "Generic (PLEG): container finished" podID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerID="16b88bdb19342563d81116f8e13c7e868beadde1813cafcd204be1678600a199" exitCode=0
Feb 24 02:18:58.207055 master-0 kubenswrapper[7864]: I0224 02:18:58.207045 7864 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerDied","Data":"16b88bdb19342563d81116f8e13c7e868beadde1813cafcd204be1678600a199"} Feb 24 02:18:58.208100 master-0 kubenswrapper[7864]: I0224 02:18:58.207094 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerStarted","Data":"623a5a87ba4a222b47eda7260b4c2df4f22508607ad25afe2feb179bbb1bb4b6"} Feb 24 02:18:58.208100 master-0 kubenswrapper[7864]: I0224 02:18:58.207126 7864 scope.go:117] "RemoveContainer" containerID="2b3213a5477253a9e7e61477d63287c522e2dc114274fd4d6193d811f2552967" Feb 24 02:18:58.874499 master-0 kubenswrapper[7864]: I0224 02:18:58.874368 7864 scope.go:117] "RemoveContainer" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d" Feb 24 02:18:58.874843 master-0 kubenswrapper[7864]: E0224 02:18:58.874813 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" Feb 24 02:18:59.082499 master-0 kubenswrapper[7864]: I0224 02:18:59.082420 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:18:59.086187 master-0 kubenswrapper[7864]: I0224 02:18:59.086128 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 24 02:18:59.086187 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:18:59.086187 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:18:59.086187 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:18:59.086510 master-0 kubenswrapper[7864]: I0224 02:18:59.086214 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:18:59.875310 master-0 kubenswrapper[7864]: I0224 02:18:59.875217 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:18:59.876447 master-0 kubenswrapper[7864]: E0224 02:18:59.875753 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:19:00.085756 master-0 kubenswrapper[7864]: I0224 02:19:00.085697 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:00.085756 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:00.085756 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:00.085756 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:00.086293 master-0 kubenswrapper[7864]: I0224 02:19:00.086242 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:01.084611 master-0 kubenswrapper[7864]: I0224 02:19:01.084509 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:01.084611 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:01.084611 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:01.084611 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:01.085858 master-0 kubenswrapper[7864]: I0224 02:19:01.084670 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:02.081795 master-0 kubenswrapper[7864]: I0224 02:19:02.081654 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:19:02.084865 master-0 kubenswrapper[7864]: I0224 02:19:02.084768 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:02.084865 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:02.084865 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:02.084865 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:02.085747 master-0 kubenswrapper[7864]: I0224 02:19:02.084912 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:02.737667 master-0 kubenswrapper[7864]: E0224 02:19:02.737544 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:19:03.085664 master-0 kubenswrapper[7864]: I0224 02:19:03.085427 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:03.085664 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:03.085664 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:03.085664 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:03.085664 master-0 kubenswrapper[7864]: I0224 02:19:03.085521 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:04.084915 master-0 kubenswrapper[7864]: I0224 02:19:04.084812 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:04.084915 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:04.084915 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:04.084915 master-0 kubenswrapper[7864]: healthz check failed Feb 
24 02:19:04.085474 master-0 kubenswrapper[7864]: I0224 02:19:04.084953 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:05.085079 master-0 kubenswrapper[7864]: I0224 02:19:05.084941 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:05.085079 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:05.085079 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:05.085079 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:05.086163 master-0 kubenswrapper[7864]: I0224 02:19:05.085075 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:06.085018 master-0 kubenswrapper[7864]: I0224 02:19:06.084890 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:06.085018 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:06.085018 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:06.085018 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:06.086188 master-0 kubenswrapper[7864]: I0224 02:19:06.085048 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" 
podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:07.085177 master-0 kubenswrapper[7864]: I0224 02:19:07.085089 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:07.085177 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:07.085177 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:07.085177 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:07.086257 master-0 kubenswrapper[7864]: I0224 02:19:07.085241 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:08.084534 master-0 kubenswrapper[7864]: I0224 02:19:08.084440 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:08.084534 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:08.084534 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:08.084534 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:08.084534 master-0 kubenswrapper[7864]: I0224 02:19:08.084537 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:09.084888 master-0 kubenswrapper[7864]: I0224 02:19:09.084755 7864 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:09.084888 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:09.084888 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:09.084888 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:09.086051 master-0 kubenswrapper[7864]: I0224 02:19:09.084922 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:10.085741 master-0 kubenswrapper[7864]: I0224 02:19:10.085658 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:10.085741 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:10.085741 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:10.085741 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:10.086833 master-0 kubenswrapper[7864]: I0224 02:19:10.085773 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:10.334760 master-0 kubenswrapper[7864]: I0224 02:19:10.334642 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/2.log" Feb 24 02:19:10.335652 
master-0 kubenswrapper[7864]: I0224 02:19:10.335616 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/1.log" Feb 24 02:19:10.336636 master-0 kubenswrapper[7864]: I0224 02:19:10.336442 7864 generic.go:334] "Generic (PLEG): container finished" podID="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" containerID="08908d93b30c563001e2ff25650203b4026284e8bf58ed4cc26c75825e885fed" exitCode=1 Feb 24 02:19:10.336636 master-0 kubenswrapper[7864]: I0224 02:19:10.336540 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerDied","Data":"08908d93b30c563001e2ff25650203b4026284e8bf58ed4cc26c75825e885fed"} Feb 24 02:19:10.336830 master-0 kubenswrapper[7864]: I0224 02:19:10.336655 7864 scope.go:117] "RemoveContainer" containerID="4302401dd0c403009a73a84bbcac7a1752f583128d1d71005e6d83a2aeed0734" Feb 24 02:19:10.337549 master-0 kubenswrapper[7864]: I0224 02:19:10.337484 7864 scope.go:117] "RemoveContainer" containerID="08908d93b30c563001e2ff25650203b4026284e8bf58ed4cc26c75825e885fed" Feb 24 02:19:10.338080 master-0 kubenswrapper[7864]: E0224 02:19:10.338015 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-k98fq_openshift-machine-api(7b4e3ba0-5194-4e20-8f12-dea4b67504fe)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" podUID="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" Feb 24 02:19:10.875756 master-0 kubenswrapper[7864]: I0224 02:19:10.875677 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:19:10.876201 master-0 
kubenswrapper[7864]: E0224 02:19:10.876144 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:19:11.085355 master-0 kubenswrapper[7864]: I0224 02:19:11.085276 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:11.085355 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:11.085355 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:11.085355 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:11.085789 master-0 kubenswrapper[7864]: I0224 02:19:11.085367 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:11.349074 master-0 kubenswrapper[7864]: I0224 02:19:11.348994 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/2.log" Feb 24 02:19:12.085204 master-0 kubenswrapper[7864]: I0224 02:19:12.085074 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:12.085204 master-0 kubenswrapper[7864]: 
[-]has-synced failed: reason withheld Feb 24 02:19:12.085204 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:12.085204 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:12.085204 master-0 kubenswrapper[7864]: I0224 02:19:12.085189 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:12.738958 master-0 kubenswrapper[7864]: E0224 02:19:12.738813 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:19:13.085395 master-0 kubenswrapper[7864]: I0224 02:19:13.085155 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:13.085395 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:13.085395 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:13.085395 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:13.085395 master-0 kubenswrapper[7864]: I0224 02:19:13.085289 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:13.876927 master-0 kubenswrapper[7864]: I0224 02:19:13.876777 7864 scope.go:117] "RemoveContainer" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d" Feb 24 02:19:13.878084 master-0 kubenswrapper[7864]: E0224 
02:19:13.877188 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" Feb 24 02:19:14.085446 master-0 kubenswrapper[7864]: I0224 02:19:14.085384 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:14.085446 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:14.085446 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:14.085446 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:14.086055 master-0 kubenswrapper[7864]: I0224 02:19:14.086009 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:14.572067 master-0 kubenswrapper[7864]: E0224 02:19:14.571887 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.18970c7daca85c27 kube-system 8917 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container 
kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:05:12 +0000 UTC,LastTimestamp:2026-02-24 02:15:33.303242361 +0000 UTC m=+697.630896013,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:19:15.085814 master-0 kubenswrapper[7864]: I0224 02:19:15.085754 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:15.085814 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:15.085814 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:15.085814 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:15.086876 master-0 kubenswrapper[7864]: I0224 02:19:15.086745 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:16.085275 master-0 kubenswrapper[7864]: I0224 02:19:16.085173 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:16.085275 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:16.085275 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:16.085275 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:16.085885 master-0 kubenswrapper[7864]: I0224 02:19:16.085308 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" 
podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:17.085744 master-0 kubenswrapper[7864]: I0224 02:19:17.085622 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:17.085744 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:17.085744 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:17.085744 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:17.086943 master-0 kubenswrapper[7864]: I0224 02:19:17.085803 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:18.085358 master-0 kubenswrapper[7864]: I0224 02:19:18.085267 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:18.085358 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:18.085358 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:18.085358 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:18.085943 master-0 kubenswrapper[7864]: I0224 02:19:18.085360 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:18.415829 master-0 kubenswrapper[7864]: I0224 02:19:18.415724 7864 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/4.log" Feb 24 02:19:18.416919 master-0 kubenswrapper[7864]: I0224 02:19:18.416736 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/3.log" Feb 24 02:19:18.417476 master-0 kubenswrapper[7864]: I0224 02:19:18.417354 7864 generic.go:334] "Generic (PLEG): container finished" podID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" containerID="7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8" exitCode=1 Feb 24 02:19:18.417476 master-0 kubenswrapper[7864]: I0224 02:19:18.417447 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerDied","Data":"7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8"} Feb 24 02:19:18.417864 master-0 kubenswrapper[7864]: I0224 02:19:18.417534 7864 scope.go:117] "RemoveContainer" containerID="049ca3ed1273887ef2dfb1c46cae5f8f4c14254dc46c7407bca3af34b7c3bdfe" Feb 24 02:19:18.418400 master-0 kubenswrapper[7864]: I0224 02:19:18.418325 7864 scope.go:117] "RemoveContainer" containerID="7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8" Feb 24 02:19:18.418918 master-0 kubenswrapper[7864]: E0224 02:19:18.418852 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" Feb 24 02:19:19.085016 master-0 kubenswrapper[7864]: I0224 
02:19:19.084937 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:19.085016 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:19.085016 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:19.085016 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:19.085496 master-0 kubenswrapper[7864]: I0224 02:19:19.085020 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:19.430158 master-0 kubenswrapper[7864]: I0224 02:19:19.429982 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/4.log"
Feb 24 02:19:20.086063 master-0 kubenswrapper[7864]: I0224 02:19:20.085903 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:20.086063 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:20.086063 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:20.086063 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:20.086892 master-0 kubenswrapper[7864]: I0224 02:19:20.086779 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:21.085242 master-0 kubenswrapper[7864]: I0224 02:19:21.085130 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:21.085242 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:21.085242 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:21.085242 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:21.086433 master-0 kubenswrapper[7864]: I0224 02:19:21.085272 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:22.084982 master-0 kubenswrapper[7864]: I0224 02:19:22.084886 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:22.084982 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:22.084982 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:22.084982 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:22.086373 master-0 kubenswrapper[7864]: I0224 02:19:22.084986 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:22.739438 master-0 kubenswrapper[7864]: E0224 02:19:22.739360 7864 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:19:22.739438 master-0 kubenswrapper[7864]: E0224 02:19:22.739418 7864 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 24 02:19:23.085123 master-0 kubenswrapper[7864]: I0224 02:19:23.084919 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:23.085123 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:23.085123 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:23.085123 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:23.085123 master-0 kubenswrapper[7864]: I0224 02:19:23.085052 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:24.085196 master-0 kubenswrapper[7864]: I0224 02:19:24.085092 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:24.085196 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:24.085196 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:24.085196 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:24.086366 master-0 kubenswrapper[7864]: I0224 02:19:24.085210 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:24.875130 master-0 kubenswrapper[7864]: I0224 02:19:24.875025 7864 scope.go:117] "RemoveContainer" containerID="08908d93b30c563001e2ff25650203b4026284e8bf58ed4cc26c75825e885fed"
Feb 24 02:19:24.875517 master-0 kubenswrapper[7864]: E0224 02:19:24.875442 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-k98fq_openshift-machine-api(7b4e3ba0-5194-4e20-8f12-dea4b67504fe)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" podUID="7b4e3ba0-5194-4e20-8f12-dea4b67504fe"
Feb 24 02:19:25.085450 master-0 kubenswrapper[7864]: I0224 02:19:25.085320 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:25.085450 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:25.085450 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:25.085450 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:25.086861 master-0 kubenswrapper[7864]: I0224 02:19:25.085461 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:25.882223 master-0 kubenswrapper[7864]: I0224 02:19:25.882124 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"
Feb 24 02:19:25.882734 master-0 kubenswrapper[7864]: E0224 02:19:25.882678 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 24 02:19:26.084838 master-0 kubenswrapper[7864]: I0224 02:19:26.084756 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:26.084838 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:26.084838 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:26.084838 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:26.085182 master-0 kubenswrapper[7864]: I0224 02:19:26.084873 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:27.085031 master-0 kubenswrapper[7864]: I0224 02:19:27.084926 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:27.085031 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:27.085031 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:27.085031 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:27.086089 master-0 kubenswrapper[7864]: I0224 02:19:27.085033 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:28.085706 master-0 kubenswrapper[7864]: I0224 02:19:28.085609 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:28.085706 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:28.085706 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:28.085706 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:28.086892 master-0 kubenswrapper[7864]: I0224 02:19:28.085722 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:28.875524 master-0 kubenswrapper[7864]: I0224 02:19:28.875426 7864 scope.go:117] "RemoveContainer" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d"
Feb 24 02:19:28.875991 master-0 kubenswrapper[7864]: E0224 02:19:28.875892 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5"
Feb 24 02:19:29.084479 master-0 kubenswrapper[7864]: I0224 02:19:29.084421 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:29.084479 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:29.084479 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:29.084479 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:29.084986 master-0 kubenswrapper[7864]: I0224 02:19:29.084938 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:30.084902 master-0 kubenswrapper[7864]: I0224 02:19:30.084810 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:30.084902 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:30.084902 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:30.084902 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:30.084902 master-0 kubenswrapper[7864]: I0224 02:19:30.084900 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:31.085948 master-0 kubenswrapper[7864]: I0224 02:19:31.085823 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:31.085948 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:31.085948 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:31.085948 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:31.085948 master-0 kubenswrapper[7864]: I0224 02:19:31.085942 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:32.085445 master-0 kubenswrapper[7864]: I0224 02:19:32.085296 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:32.085445 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:32.085445 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:32.085445 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:32.085445 master-0 kubenswrapper[7864]: I0224 02:19:32.085393 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:33.085342 master-0 kubenswrapper[7864]: I0224 02:19:33.085210 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:33.085342 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:33.085342 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:33.085342 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:33.085342 master-0 kubenswrapper[7864]: I0224 02:19:33.085318 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:33.875033 master-0 kubenswrapper[7864]: I0224 02:19:33.874965 7864 scope.go:117] "RemoveContainer" containerID="7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8"
Feb 24 02:19:33.875740 master-0 kubenswrapper[7864]: E0224 02:19:33.875390 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334"
Feb 24 02:19:34.086289 master-0 kubenswrapper[7864]: I0224 02:19:34.086225 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:34.086289 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:34.086289 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:34.086289 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:34.086621 master-0 kubenswrapper[7864]: I0224 02:19:34.086308 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:35.085091 master-0 kubenswrapper[7864]: I0224 02:19:35.085006 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:35.085091 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:35.085091 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:35.085091 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:35.086157 master-0 kubenswrapper[7864]: I0224 02:19:35.085097 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: I0224 02:19:35.899039 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: E0224 02:19:35.899519 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12652f5-003f-4b77-b2bb-b666c9d7bb53" containerName="installer"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: I0224 02:19:35.899545 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12652f5-003f-4b77-b2bb-b666c9d7bb53" containerName="installer"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: E0224 02:19:35.899604 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" containerName="kube-rbac-proxy"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: I0224 02:19:35.899619 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" containerName="kube-rbac-proxy"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: E0224 02:19:35.899677 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" containerName="multus-admission-controller"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: I0224 02:19:35.899691 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" containerName="multus-admission-controller"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: E0224 02:19:35.899737 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c78047-1c4d-4535-ba2c-31f080d6a57d" containerName="installer"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: I0224 02:19:35.899750 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c78047-1c4d-4535-ba2c-31f080d6a57d" containerName="installer"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: I0224 02:19:35.899990 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" containerName="kube-rbac-proxy"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: I0224 02:19:35.900020 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c78047-1c4d-4535-ba2c-31f080d6a57d" containerName="installer"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: I0224 02:19:35.900041 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12652f5-003f-4b77-b2bb-b666c9d7bb53" containerName="installer"
Feb 24 02:19:35.904616 master-0 kubenswrapper[7864]: I0224 02:19:35.900072 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3d08db-45fa-4fef-b1fd-2875f22d5c45" containerName="multus-admission-controller"
Feb 24 02:19:35.913607 master-0 kubenswrapper[7864]: I0224 02:19:35.912378 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:35.917082 master-0 kubenswrapper[7864]: I0224 02:19:35.917008 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 24 02:19:35.917540 master-0 kubenswrapper[7864]: I0224 02:19:35.917481 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-rchfr"
Feb 24 02:19:35.917540 master-0 kubenswrapper[7864]: I0224 02:19:35.917518 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 24 02:19:35.999213 master-0 kubenswrapper[7864]: I0224 02:19:35.999132 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:35.999213 master-0 kubenswrapper[7864]: I0224 02:19:35.999205 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803f3561-75a1-42ac-afba-fc5bb0407f9c-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:35.999610 master-0 kubenswrapper[7864]: I0224 02:19:35.999438 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:36.085129 master-0 kubenswrapper[7864]: I0224 02:19:36.085049 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:36.085129 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:36.085129 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:36.085129 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:36.086002 master-0 kubenswrapper[7864]: I0224 02:19:36.085144 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:36.101142 master-0 kubenswrapper[7864]: I0224 02:19:36.101072 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:36.101278 master-0 kubenswrapper[7864]: I0224 02:19:36.101210 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:36.101278 master-0 kubenswrapper[7864]: I0224 02:19:36.101246 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:36.101420 master-0 kubenswrapper[7864]: I0224 02:19:36.101353 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:36.101703 master-0 kubenswrapper[7864]: I0224 02:19:36.101547 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803f3561-75a1-42ac-afba-fc5bb0407f9c-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:36.131227 master-0 kubenswrapper[7864]: I0224 02:19:36.131172 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803f3561-75a1-42ac-afba-fc5bb0407f9c-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:36.255344 master-0 kubenswrapper[7864]: I0224 02:19:36.255261 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 24 02:19:36.835723 master-0 kubenswrapper[7864]: I0224 02:19:36.835490 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 24 02:19:36.875514 master-0 kubenswrapper[7864]: I0224 02:19:36.875450 7864 scope.go:117] "RemoveContainer" containerID="08908d93b30c563001e2ff25650203b4026284e8bf58ed4cc26c75825e885fed"
Feb 24 02:19:37.085274 master-0 kubenswrapper[7864]: I0224 02:19:37.085203 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:37.085274 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:37.085274 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:37.085274 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:37.086810 master-0 kubenswrapper[7864]: I0224 02:19:37.086732 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:37.598997 master-0 kubenswrapper[7864]: I0224 02:19:37.598773 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"803f3561-75a1-42ac-afba-fc5bb0407f9c","Type":"ContainerStarted","Data":"ed33cafb8f2802b88aad0c136ff60076a67f08a4decd0aaf5badcc7c3008dd37"}
Feb 24 02:19:37.598997 master-0 kubenswrapper[7864]: I0224 02:19:37.598857 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"803f3561-75a1-42ac-afba-fc5bb0407f9c","Type":"ContainerStarted","Data":"469ea89978a106e8d886f7d5bc0536175799cd2440585cb8454e806af61e0350"}
Feb 24 02:19:37.603056 master-0 kubenswrapper[7864]: I0224 02:19:37.602995 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/2.log"
Feb 24 02:19:37.603663 master-0 kubenswrapper[7864]: I0224 02:19:37.603622 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerStarted","Data":"6d16a4b4c8918a6f68fb2a5efd3e381184ca33865224072dbe4960214adf0d1a"}
Feb 24 02:19:37.637234 master-0 kubenswrapper[7864]: I0224 02:19:37.630748 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.630717026 podStartE2EDuration="2.630717026s" podCreationTimestamp="2026-02-24 02:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:19:37.625945462 +0000 UTC m=+941.953599124" watchObservedRunningTime="2026-02-24 02:19:37.630717026 +0000 UTC m=+941.958370678"
Feb 24 02:19:38.085431 master-0 kubenswrapper[7864]: I0224 02:19:38.085297 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:38.085431 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:38.085431 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:38.085431 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:38.086823 master-0 kubenswrapper[7864]: I0224 02:19:38.085462 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:39.086004 master-0 kubenswrapper[7864]: I0224 02:19:39.085899 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:39.086004 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:39.086004 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:39.086004 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:39.087612 master-0 kubenswrapper[7864]: I0224 02:19:39.086050 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:39.875526 master-0 kubenswrapper[7864]: I0224 02:19:39.875437 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"
Feb 24 02:19:39.875956 master-0 kubenswrapper[7864]: I0224 02:19:39.875826 7864 scope.go:117] "RemoveContainer" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d"
Feb 24 02:19:39.876038 master-0 kubenswrapper[7864]: E0224 02:19:39.875956 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 24 02:19:39.876300 master-0 kubenswrapper[7864]: E0224 02:19:39.876238 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-8l58x_openshift-cluster-storage-operator(f6e7b773-7ecd-4a5c-8bef-d672f371e7e5)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" podUID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5"
Feb 24 02:19:40.086101 master-0 kubenswrapper[7864]: I0224 02:19:40.085999 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:40.086101 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:40.086101 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:40.086101 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:40.087316 master-0 kubenswrapper[7864]: I0224 02:19:40.086106 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:41.085706 master-0 kubenswrapper[7864]: I0224 02:19:41.085567 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:19:41.085706 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld
Feb 24 02:19:41.085706 master-0 kubenswrapper[7864]: [+]process-running ok
Feb 24 02:19:41.085706 master-0 kubenswrapper[7864]: healthz check failed
Feb 24 02:19:41.086085 master-0 kubenswrapper[7864]: I0224 02:19:41.085732 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:19:41.665045 master-0 kubenswrapper[7864]: I0224 02:19:41.664987 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Feb 24 02:19:41.665916 master-0 kubenswrapper[7864]: I0224 02:19:41.665748 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.667688 master-0 kubenswrapper[7864]: I0224 02:19:41.667646 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-k86pk"
Feb 24 02:19:41.668904 master-0 kubenswrapper[7864]: I0224 02:19:41.668852 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 24 02:19:41.677706 master-0 kubenswrapper[7864]: I0224 02:19:41.677655 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Feb 24 02:19:41.687198 master-0 kubenswrapper[7864]: I0224 02:19:41.687139 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.687409 master-0 kubenswrapper[7864]: I0224 02:19:41.687221 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.687409 master-0 kubenswrapper[7864]: I0224 02:19:41.687278 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.788508 master-0 kubenswrapper[7864]: I0224 02:19:41.788447 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.788839 master-0 kubenswrapper[7864]: I0224 02:19:41.788549 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.788839 master-0 kubenswrapper[7864]: I0224 02:19:41.788565 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.788839 master-0 kubenswrapper[7864]: I0224 02:19:41.788624 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.788839 master-0 kubenswrapper[7864]: I0224 02:19:41.788777 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.809468 master-0 kubenswrapper[7864]: I0224 02:19:41.809410 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:19:41.992453 master-0 kubenswrapper[7864]: I0224 02:19:41.992399 7864 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 24 02:19:42.083733 master-0 kubenswrapper[7864]: I0224 02:19:42.083673 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:42.083733 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:42.083733 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:42.083733 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:42.084080 master-0 kubenswrapper[7864]: I0224 02:19:42.083749 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:42.238346 master-0 kubenswrapper[7864]: I0224 02:19:42.238294 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 24 02:19:42.255173 master-0 kubenswrapper[7864]: W0224 02:19:42.253266 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeb9f7dc4_e69f_4fc1_bb1a_1878971d279d.slice/crio-37f8d2ab136a840c907679de284c242a764649e281bfa4d2ebf7dc5249d87848 WatchSource:0}: Error finding container 37f8d2ab136a840c907679de284c242a764649e281bfa4d2ebf7dc5249d87848: Status 404 returned error can't find the container with id 37f8d2ab136a840c907679de284c242a764649e281bfa4d2ebf7dc5249d87848 Feb 24 02:19:42.642515 master-0 kubenswrapper[7864]: I0224 02:19:42.642435 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" 
event={"ID":"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d","Type":"ContainerStarted","Data":"37f8d2ab136a840c907679de284c242a764649e281bfa4d2ebf7dc5249d87848"} Feb 24 02:19:43.084601 master-0 kubenswrapper[7864]: I0224 02:19:43.084462 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:43.084601 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:43.084601 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:43.084601 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:43.085713 master-0 kubenswrapper[7864]: I0224 02:19:43.084608 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:43.656851 master-0 kubenswrapper[7864]: I0224 02:19:43.656726 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d","Type":"ContainerStarted","Data":"7141c97ac4f89de3381db3a918874906220134e85f0b183e0b4eaacb0434c063"} Feb 24 02:19:43.661309 master-0 kubenswrapper[7864]: I0224 02:19:43.661244 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-sl5hz_b36d8451-0fda-4d9d-a850-d05c8f847016/openshift-apiserver-operator/2.log" Feb 24 02:19:43.662206 master-0 kubenswrapper[7864]: I0224 02:19:43.662139 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-sl5hz_b36d8451-0fda-4d9d-a850-d05c8f847016/openshift-apiserver-operator/1.log" Feb 24 02:19:43.662302 master-0 
kubenswrapper[7864]: I0224 02:19:43.662240 7864 generic.go:334] "Generic (PLEG): container finished" podID="b36d8451-0fda-4d9d-a850-d05c8f847016" containerID="b55dffa76d343a2f86cafe3f911f9b262284cdca97531fab28e85dab3a7157d5" exitCode=1 Feb 24 02:19:43.662375 master-0 kubenswrapper[7864]: I0224 02:19:43.662301 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerDied","Data":"b55dffa76d343a2f86cafe3f911f9b262284cdca97531fab28e85dab3a7157d5"} Feb 24 02:19:43.662375 master-0 kubenswrapper[7864]: I0224 02:19:43.662362 7864 scope.go:117] "RemoveContainer" containerID="9d6fec3fa582bb40f876b3cafc2f570058ac361dce1068e53e87a7e383e88cf2" Feb 24 02:19:43.663082 master-0 kubenswrapper[7864]: I0224 02:19:43.663006 7864 scope.go:117] "RemoveContainer" containerID="b55dffa76d343a2f86cafe3f911f9b262284cdca97531fab28e85dab3a7157d5" Feb 24 02:19:43.687894 master-0 kubenswrapper[7864]: I0224 02:19:43.687805 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" podStartSLOduration=2.687785341 podStartE2EDuration="2.687785341s" podCreationTimestamp="2026-02-24 02:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:19:43.687718449 +0000 UTC m=+948.015372101" watchObservedRunningTime="2026-02-24 02:19:43.687785341 +0000 UTC m=+948.015438993" Feb 24 02:19:44.085799 master-0 kubenswrapper[7864]: I0224 02:19:44.085694 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:44.085799 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld 
Feb 24 02:19:44.085799 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:44.085799 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:44.087007 master-0 kubenswrapper[7864]: I0224 02:19:44.085835 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:44.685974 master-0 kubenswrapper[7864]: I0224 02:19:44.685887 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-sl5hz_b36d8451-0fda-4d9d-a850-d05c8f847016/openshift-apiserver-operator/2.log" Feb 24 02:19:44.686340 master-0 kubenswrapper[7864]: I0224 02:19:44.686061 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerStarted","Data":"439729411b82e0e29cbe7419552c7cfd16042f6d754bf9586cfa89a02bdfce23"} Feb 24 02:19:45.085182 master-0 kubenswrapper[7864]: I0224 02:19:45.085071 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:45.085182 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:45.085182 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:45.085182 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:45.085721 master-0 kubenswrapper[7864]: I0224 02:19:45.085194 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 24 02:19:46.085570 master-0 kubenswrapper[7864]: I0224 02:19:46.085464 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:46.085570 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:46.085570 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:46.085570 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:46.085570 master-0 kubenswrapper[7864]: I0224 02:19:46.085561 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:46.875523 master-0 kubenswrapper[7864]: I0224 02:19:46.875275 7864 scope.go:117] "RemoveContainer" containerID="7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8" Feb 24 02:19:46.875922 master-0 kubenswrapper[7864]: E0224 02:19:46.875848 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" Feb 24 02:19:47.086171 master-0 kubenswrapper[7864]: I0224 02:19:47.086034 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:47.086171 master-0 kubenswrapper[7864]: [-]has-synced failed: reason 
withheld Feb 24 02:19:47.086171 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:47.086171 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:47.087368 master-0 kubenswrapper[7864]: I0224 02:19:47.086201 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:47.612815 master-0 kubenswrapper[7864]: I0224 02:19:47.612305 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 24 02:19:47.612815 master-0 kubenswrapper[7864]: I0224 02:19:47.612681 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="803f3561-75a1-42ac-afba-fc5bb0407f9c" containerName="installer" containerID="cri-o://ed33cafb8f2802b88aad0c136ff60076a67f08a4decd0aaf5badcc7c3008dd37" gracePeriod=30 Feb 24 02:19:48.084998 master-0 kubenswrapper[7864]: I0224 02:19:48.084907 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:48.084998 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:48.084998 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:48.084998 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:48.085505 master-0 kubenswrapper[7864]: I0224 02:19:48.085010 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:48.575687 master-0 
kubenswrapper[7864]: E0224 02:19:48.575369 7864 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.18970d0d76c45509 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:15:29.871893769 +0000 UTC m=+694.199547421,LastTimestamp:2026-02-24 02:15:34.485623906 +0000 UTC m=+698.813277578,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:19:49.084620 master-0 kubenswrapper[7864]: I0224 02:19:49.084471 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:49.084620 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:49.084620 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:49.084620 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:49.085100 master-0 kubenswrapper[7864]: I0224 02:19:49.084616 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
24 02:19:50.084836 master-0 kubenswrapper[7864]: I0224 02:19:50.084764 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:50.084836 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:50.084836 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:50.084836 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:50.084836 master-0 kubenswrapper[7864]: I0224 02:19:50.084842 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:50.410689 master-0 kubenswrapper[7864]: I0224 02:19:50.410391 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 24 02:19:50.411972 master-0 kubenswrapper[7864]: I0224 02:19:50.411926 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.433500 master-0 kubenswrapper[7864]: I0224 02:19:50.433417 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 24 02:19:50.541356 master-0 kubenswrapper[7864]: I0224 02:19:50.541276 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.541722 master-0 kubenswrapper[7864]: I0224 02:19:50.541521 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.541938 master-0 kubenswrapper[7864]: I0224 02:19:50.541880 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.645070 master-0 kubenswrapper[7864]: I0224 02:19:50.644997 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.645249 master-0 kubenswrapper[7864]: I0224 02:19:50.645127 7864 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.645526 master-0 kubenswrapper[7864]: I0224 02:19:50.645406 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.645526 master-0 kubenswrapper[7864]: I0224 02:19:50.645421 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.645739 master-0 kubenswrapper[7864]: I0224 02:19:50.645635 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.674671 master-0 kubenswrapper[7864]: I0224 02:19:50.674524 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:50.781826 master-0 kubenswrapper[7864]: I0224 02:19:50.781681 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:19:51.085015 master-0 kubenswrapper[7864]: I0224 02:19:51.084927 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:51.085015 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:51.085015 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:51.085015 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:51.086054 master-0 kubenswrapper[7864]: I0224 02:19:51.085037 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:51.291726 master-0 kubenswrapper[7864]: I0224 02:19:51.291504 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 24 02:19:51.306743 master-0 kubenswrapper[7864]: W0224 02:19:51.306657 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5508683b_09ae_47a1_89fd_b0891a881e09.slice/crio-84c61cc1c9db5c98d72530e3929e9be3931d553cf93f1ab6621e14c53b3476fa WatchSource:0}: Error finding container 84c61cc1c9db5c98d72530e3929e9be3931d553cf93f1ab6621e14c53b3476fa: Status 404 returned error can't find the container with id 84c61cc1c9db5c98d72530e3929e9be3931d553cf93f1ab6621e14c53b3476fa Feb 24 02:19:51.750608 master-0 kubenswrapper[7864]: I0224 02:19:51.750489 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5508683b-09ae-47a1-89fd-b0891a881e09","Type":"ContainerStarted","Data":"84c61cc1c9db5c98d72530e3929e9be3931d553cf93f1ab6621e14c53b3476fa"} 
Feb 24 02:19:51.876605 master-0 kubenswrapper[7864]: I0224 02:19:51.876495 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:19:51.877065 master-0 kubenswrapper[7864]: E0224 02:19:51.876999 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 24 02:19:52.084834 master-0 kubenswrapper[7864]: I0224 02:19:52.084730 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:52.084834 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:52.084834 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:52.084834 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:52.084834 master-0 kubenswrapper[7864]: I0224 02:19:52.084816 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:52.763602 master-0 kubenswrapper[7864]: I0224 02:19:52.763491 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5508683b-09ae-47a1-89fd-b0891a881e09","Type":"ContainerStarted","Data":"6bd403605e79109075e7b61bac31b57ae266809e2fcec35f73761229b419851f"} Feb 24 02:19:52.802458 master-0 kubenswrapper[7864]: I0224 02:19:52.802323 7864 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.80228687 podStartE2EDuration="2.80228687s" podCreationTimestamp="2026-02-24 02:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:19:52.792971789 +0000 UTC m=+957.120625451" watchObservedRunningTime="2026-02-24 02:19:52.80228687 +0000 UTC m=+957.129940532" Feb 24 02:19:52.876827 master-0 kubenswrapper[7864]: I0224 02:19:52.876751 7864 scope.go:117] "RemoveContainer" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d" Feb 24 02:19:53.084564 master-0 kubenswrapper[7864]: I0224 02:19:53.084490 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:53.084564 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:53.084564 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:53.084564 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:53.085244 master-0 kubenswrapper[7864]: I0224 02:19:53.084613 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:53.777145 master-0 kubenswrapper[7864]: I0224 02:19:53.777090 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/4.log" Feb 24 02:19:53.777742 master-0 kubenswrapper[7864]: I0224 02:19:53.777662 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerStarted","Data":"f8325f862e5ed3523628a1c7d7820991fcc2eb76b4b3b4a1a68c402a2679e36b"} Feb 24 02:19:54.085047 master-0 kubenswrapper[7864]: I0224 02:19:54.084823 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:54.085047 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:54.085047 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:54.085047 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:54.085047 master-0 kubenswrapper[7864]: I0224 02:19:54.084944 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:55.085291 master-0 kubenswrapper[7864]: I0224 02:19:55.085214 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:55.085291 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:55.085291 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:55.085291 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:55.086147 master-0 kubenswrapper[7864]: I0224 02:19:55.085324 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 24 02:19:56.084503 master-0 kubenswrapper[7864]: I0224 02:19:56.084404 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:56.084503 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:56.084503 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:56.084503 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:56.085098 master-0 kubenswrapper[7864]: I0224 02:19:56.084533 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:57.085135 master-0 kubenswrapper[7864]: I0224 02:19:57.085052 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:57.085135 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:57.085135 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:57.085135 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:57.086185 master-0 kubenswrapper[7864]: I0224 02:19:57.085173 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:58.084973 master-0 kubenswrapper[7864]: I0224 02:19:58.084884 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:58.084973 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:58.084973 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:58.084973 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:58.086112 master-0 kubenswrapper[7864]: I0224 02:19:58.085699 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:59.084955 master-0 kubenswrapper[7864]: I0224 02:19:59.084856 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:19:59.084955 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:19:59.084955 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:19:59.084955 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:19:59.086317 master-0 kubenswrapper[7864]: I0224 02:19:59.084969 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:19:59.875850 master-0 kubenswrapper[7864]: I0224 02:19:59.875781 7864 scope.go:117] "RemoveContainer" containerID="7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8" Feb 24 02:19:59.876283 master-0 kubenswrapper[7864]: E0224 02:19:59.876232 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s 
restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" Feb 24 02:20:00.085381 master-0 kubenswrapper[7864]: I0224 02:20:00.085260 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:00.085381 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:00.085381 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:00.085381 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:00.086517 master-0 kubenswrapper[7864]: I0224 02:20:00.085741 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:01.090258 master-0 kubenswrapper[7864]: I0224 02:20:01.090146 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:01.090258 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:01.090258 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:01.090258 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:01.091451 master-0 kubenswrapper[7864]: I0224 02:20:01.090280 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 24 02:20:02.084663 master-0 kubenswrapper[7864]: I0224 02:20:02.084566 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:02.084663 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:02.084663 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:02.084663 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:02.085190 master-0 kubenswrapper[7864]: I0224 02:20:02.084703 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:03.085497 master-0 kubenswrapper[7864]: I0224 02:20:03.085396 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:03.085497 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:03.085497 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:03.085497 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:03.086665 master-0 kubenswrapper[7864]: I0224 02:20:03.085504 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:04.327052 master-0 kubenswrapper[7864]: I0224 02:20:04.326966 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:04.327052 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:04.327052 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:04.327052 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:04.328440 master-0 kubenswrapper[7864]: I0224 02:20:04.327066 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:04.875234 master-0 kubenswrapper[7864]: I0224 02:20:04.875152 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:20:05.085388 master-0 kubenswrapper[7864]: I0224 02:20:05.085303 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:05.085388 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:05.085388 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:05.085388 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:05.085930 master-0 kubenswrapper[7864]: I0224 02:20:05.085422 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:05.909752 master-0 kubenswrapper[7864]: I0224 02:20:05.909317 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb"} Feb 24 02:20:06.084435 master-0 kubenswrapper[7864]: I0224 02:20:06.084351 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:06.084435 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:06.084435 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:06.084435 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:06.084435 master-0 kubenswrapper[7864]: I0224 02:20:06.084446 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:07.084937 master-0 kubenswrapper[7864]: I0224 02:20:07.084844 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:07.084937 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:07.084937 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:07.084937 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:07.084937 master-0 kubenswrapper[7864]: I0224 02:20:07.084935 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:07.217136 master-0 kubenswrapper[7864]: I0224 
02:20:07.217028 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:20:07.225126 master-0 kubenswrapper[7864]: I0224 02:20:07.225094 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:20:07.926440 master-0 kubenswrapper[7864]: I0224 02:20:07.926292 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:20:08.085299 master-0 kubenswrapper[7864]: I0224 02:20:08.085184 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:08.085299 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:08.085299 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:08.085299 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:08.086716 master-0 kubenswrapper[7864]: I0224 02:20:08.085324 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:08.939043 master-0 kubenswrapper[7864]: I0224 02:20:08.938823 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_803f3561-75a1-42ac-afba-fc5bb0407f9c/installer/0.log" Feb 24 02:20:08.939043 master-0 kubenswrapper[7864]: I0224 02:20:08.938920 7864 generic.go:334] "Generic (PLEG): container finished" podID="803f3561-75a1-42ac-afba-fc5bb0407f9c" containerID="ed33cafb8f2802b88aad0c136ff60076a67f08a4decd0aaf5badcc7c3008dd37" exitCode=1 Feb 24 02:20:08.939614 
master-0 kubenswrapper[7864]: I0224 02:20:08.939138 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"803f3561-75a1-42ac-afba-fc5bb0407f9c","Type":"ContainerDied","Data":"ed33cafb8f2802b88aad0c136ff60076a67f08a4decd0aaf5badcc7c3008dd37"} Feb 24 02:20:09.067816 master-0 kubenswrapper[7864]: I0224 02:20:09.067740 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_803f3561-75a1-42ac-afba-fc5bb0407f9c/installer/0.log" Feb 24 02:20:09.067990 master-0 kubenswrapper[7864]: I0224 02:20:09.067865 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 24 02:20:09.085671 master-0 kubenswrapper[7864]: I0224 02:20:09.085008 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:09.085671 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:09.085671 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:09.085671 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:09.085671 master-0 kubenswrapper[7864]: I0224 02:20:09.085091 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:09.170241 master-0 kubenswrapper[7864]: I0224 02:20:09.170150 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-var-lock\") pod \"803f3561-75a1-42ac-afba-fc5bb0407f9c\" (UID: 
\"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " Feb 24 02:20:09.170570 master-0 kubenswrapper[7864]: I0224 02:20:09.170333 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803f3561-75a1-42ac-afba-fc5bb0407f9c-kube-api-access\") pod \"803f3561-75a1-42ac-afba-fc5bb0407f9c\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " Feb 24 02:20:09.170570 master-0 kubenswrapper[7864]: I0224 02:20:09.170356 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-var-lock" (OuterVolumeSpecName: "var-lock") pod "803f3561-75a1-42ac-afba-fc5bb0407f9c" (UID: "803f3561-75a1-42ac-afba-fc5bb0407f9c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:09.170570 master-0 kubenswrapper[7864]: I0224 02:20:09.170532 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-kubelet-dir\") pod \"803f3561-75a1-42ac-afba-fc5bb0407f9c\" (UID: \"803f3561-75a1-42ac-afba-fc5bb0407f9c\") " Feb 24 02:20:09.171119 master-0 kubenswrapper[7864]: I0224 02:20:09.171058 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "803f3561-75a1-42ac-afba-fc5bb0407f9c" (UID: "803f3561-75a1-42ac-afba-fc5bb0407f9c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:09.171317 master-0 kubenswrapper[7864]: I0224 02:20:09.171188 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:09.175400 master-0 kubenswrapper[7864]: I0224 02:20:09.175338 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803f3561-75a1-42ac-afba-fc5bb0407f9c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "803f3561-75a1-42ac-afba-fc5bb0407f9c" (UID: "803f3561-75a1-42ac-afba-fc5bb0407f9c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:20:09.274745 master-0 kubenswrapper[7864]: I0224 02:20:09.274668 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803f3561-75a1-42ac-afba-fc5bb0407f9c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:09.274745 master-0 kubenswrapper[7864]: I0224 02:20:09.274736 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/803f3561-75a1-42ac-afba-fc5bb0407f9c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:09.948833 master-0 kubenswrapper[7864]: I0224 02:20:09.948741 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_803f3561-75a1-42ac-afba-fc5bb0407f9c/installer/0.log" Feb 24 02:20:09.949024 master-0 kubenswrapper[7864]: I0224 02:20:09.948853 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"803f3561-75a1-42ac-afba-fc5bb0407f9c","Type":"ContainerDied","Data":"469ea89978a106e8d886f7d5bc0536175799cd2440585cb8454e806af61e0350"} Feb 24 02:20:09.949024 master-0 kubenswrapper[7864]: I0224 
02:20:09.948917 7864 scope.go:117] "RemoveContainer" containerID="ed33cafb8f2802b88aad0c136ff60076a67f08a4decd0aaf5badcc7c3008dd37" Feb 24 02:20:09.949165 master-0 kubenswrapper[7864]: I0224 02:20:09.949109 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 24 02:20:09.979314 master-0 kubenswrapper[7864]: I0224 02:20:09.978692 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 24 02:20:09.990004 master-0 kubenswrapper[7864]: I0224 02:20:09.989950 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 24 02:20:10.085615 master-0 kubenswrapper[7864]: I0224 02:20:10.085490 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:10.085615 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:10.085615 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:10.085615 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:10.086788 master-0 kubenswrapper[7864]: I0224 02:20:10.085634 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:11.085307 master-0 kubenswrapper[7864]: I0224 02:20:11.085223 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:11.085307 master-0 kubenswrapper[7864]: 
[-]has-synced failed: reason withheld Feb 24 02:20:11.085307 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:11.085307 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:11.085849 master-0 kubenswrapper[7864]: I0224 02:20:11.085335 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:11.888779 master-0 kubenswrapper[7864]: I0224 02:20:11.888679 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803f3561-75a1-42ac-afba-fc5bb0407f9c" path="/var/lib/kubelet/pods/803f3561-75a1-42ac-afba-fc5bb0407f9c/volumes" Feb 24 02:20:12.084871 master-0 kubenswrapper[7864]: I0224 02:20:12.084778 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:12.084871 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:12.084871 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:12.084871 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:12.085248 master-0 kubenswrapper[7864]: I0224 02:20:12.084883 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:12.874754 master-0 kubenswrapper[7864]: I0224 02:20:12.874666 7864 scope.go:117] "RemoveContainer" containerID="7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8" Feb 24 02:20:12.875178 master-0 kubenswrapper[7864]: E0224 02:20:12.875144 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" Feb 24 02:20:13.084758 master-0 kubenswrapper[7864]: I0224 02:20:13.084661 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:13.084758 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:13.084758 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:13.084758 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:13.085817 master-0 kubenswrapper[7864]: I0224 02:20:13.084793 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:14.084666 master-0 kubenswrapper[7864]: I0224 02:20:14.084559 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:14.084666 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:14.084666 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:14.084666 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:14.085812 master-0 kubenswrapper[7864]: I0224 02:20:14.084754 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" 
podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:15.085442 master-0 kubenswrapper[7864]: I0224 02:20:15.085352 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:15.085442 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:15.085442 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:15.085442 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:15.086561 master-0 kubenswrapper[7864]: I0224 02:20:15.085464 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:15.746151 master-0 kubenswrapper[7864]: I0224 02:20:15.746043 7864 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 24 02:20:15.746779 master-0 kubenswrapper[7864]: I0224 02:20:15.746660 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" containerID="cri-o://00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f" gracePeriod=30 Feb 24 02:20:15.746911 master-0 kubenswrapper[7864]: I0224 02:20:15.746737 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb" 
gracePeriod=30 Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: I0224 02:20:15.747795 7864 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: E0224 02:20:15.748355 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: I0224 02:20:15.748390 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: E0224 02:20:15.748436 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: I0224 02:20:15.748452 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: E0224 02:20:15.748480 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: I0224 02:20:15.748496 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: E0224 02:20:15.748528 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: I0224 02:20:15.748544 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: E0224 
02:20:15.748570 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: I0224 02:20:15.748632 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: E0224 02:20:15.748653 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 24 02:20:15.748655 master-0 kubenswrapper[7864]: I0224 02:20:15.748672 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: E0224 02:20:15.748698 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.748716 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: E0224 02:20:15.748748 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.748767 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: E0224 02:20:15.748791 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.748808 7864 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: E0224 02:20:15.748862 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803f3561-75a1-42ac-afba-fc5bb0407f9c" containerName="installer" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.748880 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="803f3561-75a1-42ac-afba-fc5bb0407f9c" containerName="installer" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.749232 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.749262 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.749293 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.749317 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.749351 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.749372 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.749407 7864 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="803f3561-75a1-42ac-afba-fc5bb0407f9c" containerName="installer" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: E0224 02:20:15.749749 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.749903 master-0 kubenswrapper[7864]: I0224 02:20:15.749776 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.751442 master-0 kubenswrapper[7864]: I0224 02:20:15.750091 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.751442 master-0 kubenswrapper[7864]: I0224 02:20:15.750123 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.751442 master-0 kubenswrapper[7864]: I0224 02:20:15.750148 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.751442 master-0 kubenswrapper[7864]: I0224 02:20:15.750173 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 02:20:15.752465 master-0 kubenswrapper[7864]: I0224 02:20:15.752400 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:15.902189 master-0 kubenswrapper[7864]: I0224 02:20:15.898815 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:15.902189 master-0 kubenswrapper[7864]: I0224 02:20:15.898887 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:15.912089 master-0 kubenswrapper[7864]: I0224 02:20:15.911951 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:20:15.969792 master-0 kubenswrapper[7864]: I0224 02:20:15.969734 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:20:16.000893 master-0 kubenswrapper[7864]: I0224 02:20:16.000364 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:16.000893 master-0 kubenswrapper[7864]: I0224 02:20:16.000430 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:16.000893 master-0 kubenswrapper[7864]: I0224 02:20:16.000657 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:16.000893 master-0 kubenswrapper[7864]: I0224 02:20:16.000708 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:16.018601 master-0 kubenswrapper[7864]: I0224 02:20:16.018079 7864 generic.go:334] "Generic (PLEG): container finished" podID="eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" containerID="7141c97ac4f89de3381db3a918874906220134e85f0b183e0b4eaacb0434c063" 
exitCode=0 Feb 24 02:20:16.018601 master-0 kubenswrapper[7864]: I0224 02:20:16.018163 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d","Type":"ContainerDied","Data":"7141c97ac4f89de3381db3a918874906220134e85f0b183e0b4eaacb0434c063"} Feb 24 02:20:16.033592 master-0 kubenswrapper[7864]: I0224 02:20:16.030623 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb" exitCode=0 Feb 24 02:20:16.033592 master-0 kubenswrapper[7864]: I0224 02:20:16.030744 7864 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f" exitCode=0 Feb 24 02:20:16.033592 master-0 kubenswrapper[7864]: I0224 02:20:16.030771 7864 scope.go:117] "RemoveContainer" containerID="7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb" Feb 24 02:20:16.033592 master-0 kubenswrapper[7864]: I0224 02:20:16.030790 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 02:20:16.084604 master-0 kubenswrapper[7864]: I0224 02:20:16.079618 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:20:16.084827 master-0 kubenswrapper[7864]: I0224 02:20:16.084754 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:16.084827 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:16.084827 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:16.084827 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:16.084827 master-0 kubenswrapper[7864]: I0224 02:20:16.084808 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101325 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101378 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101403 7864 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101434 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101483 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101509 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets" (OuterVolumeSpecName: "secrets") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101551 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101557 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101568 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs" (OuterVolumeSpecName: "logs") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.101608 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config" (OuterVolumeSpecName: "config") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.102300 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.102315 7864 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.102326 7864 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.102337 7864 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.102429 7864 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:16.104725 master-0 kubenswrapper[7864]: I0224 02:20:16.102893 7864 scope.go:117] "RemoveContainer" containerID="00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f" Feb 24 02:20:16.116958 master-0 kubenswrapper[7864]: I0224 02:20:16.116924 7864 scope.go:117] "RemoveContainer" containerID="1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765" Feb 24 02:20:16.142958 master-0 kubenswrapper[7864]: E0224 02:20:16.142918 7864 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-podeb9f7dc4_e69f_4fc1_bb1a_1878971d279d.slice/crio-conmon-7141c97ac4f89de3381db3a918874906220134e85f0b183e0b4eaacb0434c063.scope\": RecentStats: unable to find data in memory cache]" Feb 24 02:20:16.147161 master-0 kubenswrapper[7864]: I0224 02:20:16.147121 7864 scope.go:117] "RemoveContainer" containerID="7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb" Feb 24 02:20:16.147732 master-0 kubenswrapper[7864]: E0224 02:20:16.147673 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb\": container with ID starting with 7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb not found: ID does not exist" containerID="7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb" Feb 24 02:20:16.147832 master-0 kubenswrapper[7864]: I0224 02:20:16.147741 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb"} err="failed to get container status \"7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb\": rpc error: code = NotFound desc = could not find container \"7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb\": container with ID starting with 7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb not found: ID does not exist" Feb 24 02:20:16.147832 master-0 kubenswrapper[7864]: I0224 02:20:16.147779 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:20:16.148364 master-0 kubenswrapper[7864]: E0224 02:20:16.148318 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503\": container with ID starting with 
234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503 not found: ID does not exist" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:20:16.148448 master-0 kubenswrapper[7864]: I0224 02:20:16.148356 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"} err="failed to get container status \"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503\": rpc error: code = NotFound desc = could not find container \"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503\": container with ID starting with 234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503 not found: ID does not exist" Feb 24 02:20:16.148448 master-0 kubenswrapper[7864]: I0224 02:20:16.148381 7864 scope.go:117] "RemoveContainer" containerID="00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f" Feb 24 02:20:16.148932 master-0 kubenswrapper[7864]: E0224 02:20:16.148872 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f\": container with ID starting with 00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f not found: ID does not exist" containerID="00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f" Feb 24 02:20:16.149044 master-0 kubenswrapper[7864]: I0224 02:20:16.148947 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f"} err="failed to get container status \"00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f\": rpc error: code = NotFound desc = could not find container \"00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f\": container with ID starting with 
00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f not found: ID does not exist" Feb 24 02:20:16.149044 master-0 kubenswrapper[7864]: I0224 02:20:16.148989 7864 scope.go:117] "RemoveContainer" containerID="1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765" Feb 24 02:20:16.149415 master-0 kubenswrapper[7864]: E0224 02:20:16.149368 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765\": container with ID starting with 1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765 not found: ID does not exist" containerID="1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765" Feb 24 02:20:16.149415 master-0 kubenswrapper[7864]: I0224 02:20:16.149400 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765"} err="failed to get container status \"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765\": rpc error: code = NotFound desc = could not find container \"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765\": container with ID starting with 1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765 not found: ID does not exist" Feb 24 02:20:16.149415 master-0 kubenswrapper[7864]: I0224 02:20:16.149418 7864 scope.go:117] "RemoveContainer" containerID="7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb" Feb 24 02:20:16.149923 master-0 kubenswrapper[7864]: I0224 02:20:16.149873 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb"} err="failed to get container status \"7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb\": rpc error: code = NotFound desc = could not find container 
\"7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb\": container with ID starting with 7695e9f7f4d83409e69769002fffe39946b7e5abced96f2196fee9110126dfbb not found: ID does not exist" Feb 24 02:20:16.149923 master-0 kubenswrapper[7864]: I0224 02:20:16.149903 7864 scope.go:117] "RemoveContainer" containerID="234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503" Feb 24 02:20:16.150358 master-0 kubenswrapper[7864]: I0224 02:20:16.150281 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503"} err="failed to get container status \"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503\": rpc error: code = NotFound desc = could not find container \"234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503\": container with ID starting with 234a47ecb3b391c1e0c552959fbe19967c8d649e899f4b1ca62fd5e73e38c503 not found: ID does not exist" Feb 24 02:20:16.150440 master-0 kubenswrapper[7864]: I0224 02:20:16.150357 7864 scope.go:117] "RemoveContainer" containerID="00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f" Feb 24 02:20:16.150802 master-0 kubenswrapper[7864]: I0224 02:20:16.150761 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f"} err="failed to get container status \"00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f\": rpc error: code = NotFound desc = could not find container \"00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f\": container with ID starting with 00f4257f3724c1a89a1afb3daabff19398e78a26b25bcfbd1dca685042ce886f not found: ID does not exist" Feb 24 02:20:16.150802 master-0 kubenswrapper[7864]: I0224 02:20:16.150785 7864 scope.go:117] "RemoveContainer" containerID="1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765" Feb 24 
02:20:16.151119 master-0 kubenswrapper[7864]: I0224 02:20:16.151076 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765"} err="failed to get container status \"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765\": rpc error: code = NotFound desc = could not find container \"1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765\": container with ID starting with 1547c14f4eb619b96bd863bc221c6f76692c0c715d2c89a10b3b0a90e8c9a765 not found: ID does not exist" Feb 24 02:20:16.180728 master-0 kubenswrapper[7864]: I0224 02:20:16.180673 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:16.230750 master-0 kubenswrapper[7864]: W0224 02:20:16.213687 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod986668ae1bbdf9cce9dceeca068e9031.slice/crio-d7b6636edfdbd08e77deab1053c053904d89a5b39e0804954f8c80e56f1c9467 WatchSource:0}: Error finding container d7b6636edfdbd08e77deab1053c053904d89a5b39e0804954f8c80e56f1c9467: Status 404 returned error can't find the container with id d7b6636edfdbd08e77deab1053c053904d89a5b39e0804954f8c80e56f1c9467 Feb 24 02:20:17.048157 master-0 kubenswrapper[7864]: I0224 02:20:17.047989 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"8dc4659ecc15c6bfe49a9925903b4f4687f239838392613c4865c83a8905650b"} Feb 24 02:20:17.048532 master-0 kubenswrapper[7864]: I0224 02:20:17.048503 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"802885a1b2ef2df10b5ffa5642c921493548e8e31f3eaf3dc4bdd2f7c156af95"} Feb 24 02:20:17.048765 master-0 kubenswrapper[7864]: I0224 02:20:17.048737 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"d7b6636edfdbd08e77deab1053c053904d89a5b39e0804954f8c80e56f1c9467"} Feb 24 02:20:17.090717 master-0 kubenswrapper[7864]: I0224 02:20:17.090021 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:17.090717 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:17.090717 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:17.090717 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:17.090717 master-0 kubenswrapper[7864]: I0224 02:20:17.090125 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:17.520155 master-0 kubenswrapper[7864]: I0224 02:20:17.520097 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 24 02:20:17.640693 master-0 kubenswrapper[7864]: I0224 02:20:17.640623 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-var-lock\") pod \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " Feb 24 02:20:17.641227 master-0 kubenswrapper[7864]: I0224 02:20:17.640821 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-var-lock" (OuterVolumeSpecName: "var-lock") pod "eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" (UID: "eb9f7dc4-e69f-4fc1-bb1a-1878971d279d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:17.641391 master-0 kubenswrapper[7864]: I0224 02:20:17.641362 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kube-api-access\") pod \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " Feb 24 02:20:17.641725 master-0 kubenswrapper[7864]: I0224 02:20:17.641698 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kubelet-dir\") pod \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\" (UID: \"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d\") " Feb 24 02:20:17.641906 master-0 kubenswrapper[7864]: I0224 02:20:17.641835 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" (UID: "eb9f7dc4-e69f-4fc1-bb1a-1878971d279d"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:17.642543 master-0 kubenswrapper[7864]: I0224 02:20:17.642514 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:17.642724 master-0 kubenswrapper[7864]: I0224 02:20:17.642702 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:17.646199 master-0 kubenswrapper[7864]: I0224 02:20:17.646113 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" (UID: "eb9f7dc4-e69f-4fc1-bb1a-1878971d279d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:20:17.745508 master-0 kubenswrapper[7864]: I0224 02:20:17.745426 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb9f7dc4-e69f-4fc1-bb1a-1878971d279d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:17.902180 master-0 kubenswrapper[7864]: I0224 02:20:17.902108 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ad9373c007a4fcd25e70622bdc8deb" path="/var/lib/kubelet/pods/c9ad9373c007a4fcd25e70622bdc8deb/volumes" Feb 24 02:20:17.903250 master-0 kubenswrapper[7864]: I0224 02:20:17.903009 7864 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Feb 24 02:20:17.928567 master-0 kubenswrapper[7864]: I0224 02:20:17.928421 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 24 02:20:17.928567 master-0 kubenswrapper[7864]: I0224 02:20:17.928476 7864 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="c2be8065-2cea-4042-afbc-0a0332954ae9" Feb 24 02:20:17.934863 master-0 kubenswrapper[7864]: I0224 02:20:17.934549 7864 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 24 02:20:17.934863 master-0 kubenswrapper[7864]: I0224 02:20:17.934656 7864 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="c2be8065-2cea-4042-afbc-0a0332954ae9" Feb 24 02:20:18.062332 master-0 kubenswrapper[7864]: I0224 02:20:18.062236 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"dea6fc53fd855f67e834ba785b44e2405a5a05c89259e81225c2459f00ab9410"} Feb 24 02:20:18.062332 master-0 kubenswrapper[7864]: I0224 02:20:18.062325 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"f7e48d7a0d2c98b03ed618e0f0670a90b569c740794e4265b966c9259a6da4db"} Feb 24 02:20:18.065940 master-0 kubenswrapper[7864]: I0224 02:20:18.065864 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d","Type":"ContainerDied","Data":"37f8d2ab136a840c907679de284c242a764649e281bfa4d2ebf7dc5249d87848"} Feb 24 02:20:18.065940 master-0 kubenswrapper[7864]: I0224 02:20:18.065936 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f8d2ab136a840c907679de284c242a764649e281bfa4d2ebf7dc5249d87848" Feb 24 02:20:18.066132 master-0 kubenswrapper[7864]: I0224 02:20:18.066031 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 24 02:20:18.086193 master-0 kubenswrapper[7864]: I0224 02:20:18.086105 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:18.086193 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:18.086193 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:18.086193 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:18.086193 master-0 kubenswrapper[7864]: I0224 02:20:18.086200 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:18.108279 master-0 kubenswrapper[7864]: I0224 02:20:18.108136 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=3.108098992 podStartE2EDuration="3.108098992s" podCreationTimestamp="2026-02-24 02:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:20:18.094444639 +0000 UTC m=+982.422098311" watchObservedRunningTime="2026-02-24 02:20:18.108098992 +0000 UTC m=+982.435752664" Feb 24 02:20:18.162288 master-0 kubenswrapper[7864]: I0224 02:20:18.162211 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Feb 24 02:20:18.162771 master-0 kubenswrapper[7864]: E0224 02:20:18.162737 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" containerName="installer" Feb 24 
02:20:18.162771 master-0 kubenswrapper[7864]: I0224 02:20:18.162770 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" containerName="installer" Feb 24 02:20:18.163060 master-0 kubenswrapper[7864]: I0224 02:20:18.163029 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" containerName="installer" Feb 24 02:20:18.163958 master-0 kubenswrapper[7864]: I0224 02:20:18.163918 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.170855 master-0 kubenswrapper[7864]: I0224 02:20:18.170800 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 24 02:20:18.170855 master-0 kubenswrapper[7864]: I0224 02:20:18.170813 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-5hsr5" Feb 24 02:20:18.177939 master-0 kubenswrapper[7864]: I0224 02:20:18.177871 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Feb 24 02:20:18.255713 master-0 kubenswrapper[7864]: I0224 02:20:18.255624 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.255903 master-0 kubenswrapper[7864]: I0224 02:20:18.255763 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " 
pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.256506 master-0 kubenswrapper[7864]: I0224 02:20:18.256409 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.359136 master-0 kubenswrapper[7864]: I0224 02:20:18.359028 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.359285 master-0 kubenswrapper[7864]: I0224 02:20:18.359215 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.359417 master-0 kubenswrapper[7864]: I0224 02:20:18.359369 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.359635 master-0 kubenswrapper[7864]: I0224 02:20:18.359567 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-var-lock\") pod 
\"installer-3-retry-1-master-0\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.360237 master-0 kubenswrapper[7864]: I0224 02:20:18.360185 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.389484 master-0 kubenswrapper[7864]: I0224 02:20:18.389397 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:18.503535 master-0 kubenswrapper[7864]: I0224 02:20:18.503445 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:19.036212 master-0 kubenswrapper[7864]: W0224 02:20:19.036115 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod27f0c4d0_17dd_49ed_a8a4_7be1d82738c7.slice/crio-f562d675528b0b4d595c589a6e7d7e57e6c855daab8a48f034a1f5df7b3b29a7 WatchSource:0}: Error finding container f562d675528b0b4d595c589a6e7d7e57e6c855daab8a48f034a1f5df7b3b29a7: Status 404 returned error can't find the container with id f562d675528b0b4d595c589a6e7d7e57e6c855daab8a48f034a1f5df7b3b29a7 Feb 24 02:20:19.038211 master-0 kubenswrapper[7864]: I0224 02:20:19.038138 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Feb 24 02:20:19.078752 master-0 kubenswrapper[7864]: I0224 02:20:19.078564 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7","Type":"ContainerStarted","Data":"f562d675528b0b4d595c589a6e7d7e57e6c855daab8a48f034a1f5df7b3b29a7"} Feb 24 02:20:19.090976 master-0 kubenswrapper[7864]: I0224 02:20:19.090889 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:19.090976 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:19.090976 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:19.090976 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:19.091256 master-0 kubenswrapper[7864]: I0224 02:20:19.090974 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 02:20:20.085484 master-0 kubenswrapper[7864]: I0224 02:20:20.085397 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:20.085484 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:20.085484 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:20.085484 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:20.086543 master-0 kubenswrapper[7864]: I0224 02:20:20.085516 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:20.091769 master-0 kubenswrapper[7864]: I0224 02:20:20.091707 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7","Type":"ContainerStarted","Data":"14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f"} Feb 24 02:20:20.118024 master-0 kubenswrapper[7864]: I0224 02:20:20.117901 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" podStartSLOduration=2.117869454 podStartE2EDuration="2.117869454s" podCreationTimestamp="2026-02-24 02:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:20:20.115417085 +0000 UTC m=+984.443070747" watchObservedRunningTime="2026-02-24 02:20:20.117869454 +0000 UTC m=+984.445523106" Feb 24 02:20:21.085547 master-0 kubenswrapper[7864]: I0224 02:20:21.085395 7864 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:21.085547 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:21.085547 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:21.085547 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:21.085547 master-0 kubenswrapper[7864]: I0224 02:20:21.085528 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:22.084735 master-0 kubenswrapper[7864]: I0224 02:20:22.084622 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:22.084735 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:22.084735 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:22.084735 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:22.085454 master-0 kubenswrapper[7864]: I0224 02:20:22.084738 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:23.085988 master-0 kubenswrapper[7864]: I0224 02:20:23.085853 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
02:20:23.085988 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:23.085988 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:23.085988 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:23.085988 master-0 kubenswrapper[7864]: I0224 02:20:23.085972 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:24.085209 master-0 kubenswrapper[7864]: I0224 02:20:24.085114 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:24.085209 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:24.085209 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:24.085209 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:24.085805 master-0 kubenswrapper[7864]: I0224 02:20:24.085225 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:25.085612 master-0 kubenswrapper[7864]: I0224 02:20:25.085493 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:25.085612 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:25.085612 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:25.085612 master-0 kubenswrapper[7864]: healthz 
check failed Feb 24 02:20:25.086430 master-0 kubenswrapper[7864]: I0224 02:20:25.085647 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:26.084701 master-0 kubenswrapper[7864]: I0224 02:20:26.084559 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:26.084701 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:26.084701 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:26.084701 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:26.085254 master-0 kubenswrapper[7864]: I0224 02:20:26.084762 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:26.181189 master-0 kubenswrapper[7864]: I0224 02:20:26.181118 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:26.181189 master-0 kubenswrapper[7864]: I0224 02:20:26.181193 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:26.182303 master-0 kubenswrapper[7864]: I0224 02:20:26.181217 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:26.182303 master-0 kubenswrapper[7864]: I0224 02:20:26.181246 7864 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:26.189664 master-0 kubenswrapper[7864]: I0224 02:20:26.188735 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:26.190052 master-0 kubenswrapper[7864]: I0224 02:20:26.189808 7864 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:26.874923 master-0 kubenswrapper[7864]: I0224 02:20:26.874793 7864 scope.go:117] "RemoveContainer" containerID="7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8" Feb 24 02:20:26.875451 master-0 kubenswrapper[7864]: E0224 02:20:26.875430 7864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-6dlqb_openshift-ingress-operator(c3278a82-ee70-4d6c-9c96-f8cb1bcb9334)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" podUID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" Feb 24 02:20:27.084328 master-0 kubenswrapper[7864]: I0224 02:20:27.084254 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:27.084328 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:27.084328 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:27.084328 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:27.084708 master-0 kubenswrapper[7864]: I0224 02:20:27.084342 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" 
podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:27.156494 master-0 kubenswrapper[7864]: I0224 02:20:27.156350 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:27.158712 master-0 kubenswrapper[7864]: I0224 02:20:27.158640 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:28.084702 master-0 kubenswrapper[7864]: I0224 02:20:28.084568 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:28.084702 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:28.084702 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:28.084702 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:28.086069 master-0 kubenswrapper[7864]: I0224 02:20:28.084706 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:29.084730 master-0 kubenswrapper[7864]: I0224 02:20:29.084632 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:29.084730 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:29.084730 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:29.084730 master-0 
kubenswrapper[7864]: healthz check failed Feb 24 02:20:29.085862 master-0 kubenswrapper[7864]: I0224 02:20:29.084738 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:30.085394 master-0 kubenswrapper[7864]: I0224 02:20:30.085286 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:30.085394 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:30.085394 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:30.085394 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:30.086550 master-0 kubenswrapper[7864]: I0224 02:20:30.085402 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:31.084899 master-0 kubenswrapper[7864]: I0224 02:20:31.084748 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:31.084899 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:31.084899 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:31.084899 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:31.084899 master-0 kubenswrapper[7864]: I0224 02:20:31.084859 7864 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:32.085091 master-0 kubenswrapper[7864]: I0224 02:20:32.085010 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:32.085091 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:32.085091 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:32.085091 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:32.086232 master-0 kubenswrapper[7864]: I0224 02:20:32.085154 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:33.086740 master-0 kubenswrapper[7864]: I0224 02:20:33.086649 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:33.086740 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:33.086740 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:33.086740 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:33.087476 master-0 kubenswrapper[7864]: I0224 02:20:33.086790 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:34.085460 
master-0 kubenswrapper[7864]: I0224 02:20:34.085394 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:34.085460 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:34.085460 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:34.085460 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:34.085992 master-0 kubenswrapper[7864]: I0224 02:20:34.085491 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:35.085740 master-0 kubenswrapper[7864]: I0224 02:20:35.085650 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:35.085740 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:35.085740 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:35.085740 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:35.086921 master-0 kubenswrapper[7864]: I0224 02:20:35.085763 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:36.085714 master-0 kubenswrapper[7864]: I0224 02:20:36.085569 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:36.085714 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:36.085714 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:36.085714 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:36.086882 master-0 kubenswrapper[7864]: I0224 02:20:36.085757 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:37.085481 master-0 kubenswrapper[7864]: I0224 02:20:37.085360 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:37.085481 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:37.085481 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:37.085481 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:37.086040 master-0 kubenswrapper[7864]: I0224 02:20:37.085503 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:38.084749 master-0 kubenswrapper[7864]: I0224 02:20:38.084639 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:38.084749 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:38.084749 master-0 
kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:38.084749 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:38.085847 master-0 kubenswrapper[7864]: I0224 02:20:38.084783 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:39.086752 master-0 kubenswrapper[7864]: I0224 02:20:39.086646 7864 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Feb 24 02:20:39.087696 master-0 kubenswrapper[7864]: I0224 02:20:39.086982 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" podUID="27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" containerName="installer" containerID="cri-o://14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f" gracePeriod=30 Feb 24 02:20:39.087696 master-0 kubenswrapper[7864]: I0224 02:20:39.087134 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:39.087696 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:39.087696 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:39.087696 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:39.087696 master-0 kubenswrapper[7864]: I0224 02:20:39.087237 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:40.085439 master-0 kubenswrapper[7864]: I0224 02:20:40.085342 7864 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:40.085439 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:40.085439 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:40.085439 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:40.086247 master-0 kubenswrapper[7864]: I0224 02:20:40.085467 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:40.875635 master-0 kubenswrapper[7864]: I0224 02:20:40.875538 7864 scope.go:117] "RemoveContainer" containerID="7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8" Feb 24 02:20:41.085669 master-0 kubenswrapper[7864]: I0224 02:20:41.085489 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:41.085669 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:41.085669 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:41.085669 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:41.085669 master-0 kubenswrapper[7864]: I0224 02:20:41.085644 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:41.088135 master-0 kubenswrapper[7864]: I0224 02:20:41.088067 7864 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 24 02:20:41.089562 master-0 kubenswrapper[7864]: I0224 02:20:41.089458 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.091375 master-0 kubenswrapper[7864]: I0224 02:20:41.091314 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.091652 master-0 kubenswrapper[7864]: I0224 02:20:41.091544 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-var-lock\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.091798 master-0 kubenswrapper[7864]: I0224 02:20:41.091762 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.110248 master-0 kubenswrapper[7864]: I0224 02:20:41.110176 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 24 02:20:41.194184 master-0 kubenswrapper[7864]: I0224 02:20:41.193988 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: 
\"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.194184 master-0 kubenswrapper[7864]: I0224 02:20:41.194140 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.194520 master-0 kubenswrapper[7864]: I0224 02:20:41.194169 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.194520 master-0 kubenswrapper[7864]: I0224 02:20:41.194241 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-var-lock\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.194520 master-0 kubenswrapper[7864]: I0224 02:20:41.194374 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-var-lock\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.225810 master-0 kubenswrapper[7864]: I0224 02:20:41.225724 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " 
pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:41.278255 master-0 kubenswrapper[7864]: I0224 02:20:41.278173 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/4.log" Feb 24 02:20:41.279092 master-0 kubenswrapper[7864]: I0224 02:20:41.279030 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"cf86bb9cd234b5b6c8b149beb0c0baa8d6338735d55b02c552c06e59b051b932"} Feb 24 02:20:41.434486 master-0 kubenswrapper[7864]: I0224 02:20:41.434406 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:42.008978 master-0 kubenswrapper[7864]: I0224 02:20:42.008894 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 24 02:20:42.087628 master-0 kubenswrapper[7864]: I0224 02:20:42.083973 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:42.087628 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:42.087628 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:42.087628 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:42.087628 master-0 kubenswrapper[7864]: I0224 02:20:42.084089 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:42.292033 master-0 kubenswrapper[7864]: I0224 
02:20:42.291868 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd","Type":"ContainerStarted","Data":"87721a77ed25537b4640a4bb3b51b25ccc9baed9db06541dd0ac651dd7e4b7bb"} Feb 24 02:20:43.084759 master-0 kubenswrapper[7864]: I0224 02:20:43.084631 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:43.084759 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:43.084759 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:43.084759 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:43.085859 master-0 kubenswrapper[7864]: I0224 02:20:43.084837 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:43.304940 master-0 kubenswrapper[7864]: I0224 02:20:43.304854 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd","Type":"ContainerStarted","Data":"a1bd23ca02400c09ae750684bb9e9e78e05cea2070ce8f8f95459966c9e876eb"} Feb 24 02:20:43.339481 master-0 kubenswrapper[7864]: I0224 02:20:43.339224 7864 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=2.339170597 podStartE2EDuration="2.339170597s" podCreationTimestamp="2026-02-24 02:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:20:43.330232056 +0000 UTC 
m=+1007.657885718" watchObservedRunningTime="2026-02-24 02:20:43.339170597 +0000 UTC m=+1007.666824249" Feb 24 02:20:44.085918 master-0 kubenswrapper[7864]: I0224 02:20:44.085823 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:44.085918 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:44.085918 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:44.085918 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:44.086966 master-0 kubenswrapper[7864]: I0224 02:20:44.085971 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:45.084877 master-0 kubenswrapper[7864]: I0224 02:20:45.084770 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:45.084877 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:45.084877 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:45.084877 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:45.085384 master-0 kubenswrapper[7864]: I0224 02:20:45.084897 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:46.084612 master-0 kubenswrapper[7864]: I0224 02:20:46.084496 7864 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:46.084612 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:46.084612 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:46.084612 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:46.086002 master-0 kubenswrapper[7864]: I0224 02:20:46.084623 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:47.085973 master-0 kubenswrapper[7864]: I0224 02:20:47.085864 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:47.085973 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:47.085973 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:47.085973 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:47.087011 master-0 kubenswrapper[7864]: I0224 02:20:47.085994 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:48.085028 master-0 kubenswrapper[7864]: I0224 02:20:48.084957 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
02:20:48.085028 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:48.085028 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:48.085028 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:48.085650 master-0 kubenswrapper[7864]: I0224 02:20:48.085599 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:49.085146 master-0 kubenswrapper[7864]: I0224 02:20:49.085077 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:49.085146 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:49.085146 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:49.085146 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:49.086409 master-0 kubenswrapper[7864]: I0224 02:20:49.086354 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:49.303956 master-0 kubenswrapper[7864]: I0224 02:20:49.303847 7864 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 24 02:20:49.305384 master-0 kubenswrapper[7864]: I0224 02:20:49.305335 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.309159 master-0 kubenswrapper[7864]: I0224 02:20:49.309082 7864 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-k86pk" Feb 24 02:20:49.339698 master-0 kubenswrapper[7864]: I0224 02:20:49.339482 7864 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 02:20:49.346363 master-0 kubenswrapper[7864]: I0224 02:20:49.346301 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 24 02:20:49.359072 master-0 kubenswrapper[7864]: I0224 02:20:49.358990 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-var-lock\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.359291 master-0 kubenswrapper[7864]: I0224 02:20:49.359099 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.359390 master-0 kubenswrapper[7864]: I0224 02:20:49.359366 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kube-api-access\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.463265 master-0 
kubenswrapper[7864]: I0224 02:20:49.463153 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kube-api-access\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.463265 master-0 kubenswrapper[7864]: I0224 02:20:49.463258 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-var-lock\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.463746 master-0 kubenswrapper[7864]: I0224 02:20:49.463312 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.463746 master-0 kubenswrapper[7864]: I0224 02:20:49.463483 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-var-lock\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.463746 master-0 kubenswrapper[7864]: I0224 02:20:49.463517 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.493671 master-0 
kubenswrapper[7864]: I0224 02:20:49.493618 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kube-api-access\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:49.670230 master-0 kubenswrapper[7864]: I0224 02:20:49.670060 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:50.085894 master-0 kubenswrapper[7864]: I0224 02:20:50.085706 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:50.085894 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:50.085894 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:50.085894 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:50.085894 master-0 kubenswrapper[7864]: I0224 02:20:50.085822 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:50.130671 master-0 kubenswrapper[7864]: I0224 02:20:50.130622 7864 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 24 02:20:50.132260 master-0 kubenswrapper[7864]: I0224 02:20:50.132206 7864 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 24 02:20:50.132540 master-0 kubenswrapper[7864]: I0224 02:20:50.132463 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.132738 master-0 kubenswrapper[7864]: I0224 02:20:50.132568 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" containerID="cri-o://c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8" gracePeriod=15 Feb 24 02:20:50.132941 master-0 kubenswrapper[7864]: I0224 02:20:50.132881 7864 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce" gracePeriod=15 Feb 24 02:20:50.136539 master-0 kubenswrapper[7864]: I0224 02:20:50.136483 7864 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 24 02:20:50.136932 master-0 kubenswrapper[7864]: E0224 02:20:50.136869 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 24 02:20:50.136932 master-0 kubenswrapper[7864]: I0224 02:20:50.136897 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 24 02:20:50.136932 master-0 kubenswrapper[7864]: E0224 02:20:50.136926 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 24 02:20:50.137181 master-0 kubenswrapper[7864]: I0224 02:20:50.136939 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 24 02:20:50.137181 master-0 kubenswrapper[7864]: E0224 
02:20:50.136975 7864 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 24 02:20:50.137181 master-0 kubenswrapper[7864]: I0224 02:20:50.136988 7864 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 24 02:20:50.137352 master-0 kubenswrapper[7864]: I0224 02:20:50.137283 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" Feb 24 02:20:50.137352 master-0 kubenswrapper[7864]: I0224 02:20:50.137308 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup" Feb 24 02:20:50.137352 master-0 kubenswrapper[7864]: I0224 02:20:50.137332 7864 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 24 02:20:50.142118 master-0 kubenswrapper[7864]: I0224 02:20:50.142061 7864 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.181245 master-0 kubenswrapper[7864]: I0224 02:20:50.180027 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.181245 master-0 kubenswrapper[7864]: I0224 02:20:50.180256 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.181245 master-0 kubenswrapper[7864]: I0224 02:20:50.180696 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.182382 master-0 kubenswrapper[7864]: I0224 02:20:50.182150 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.182382 master-0 kubenswrapper[7864]: I0224 02:20:50.182299 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.182632 master-0 kubenswrapper[7864]: I0224 02:20:50.182407 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.182758 master-0 kubenswrapper[7864]: I0224 02:20:50.182682 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.183207 master-0 kubenswrapper[7864]: I0224 02:20:50.182831 7864 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.210016 master-0 kubenswrapper[7864]: I0224 02:20:50.205501 7864 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 24 02:20:50.217417 master-0 kubenswrapper[7864]: W0224 02:20:50.217356 7864 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod070ebb2d_57a2_4c76_8c93_e09d398f3b73.slice/crio-b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737 WatchSource:0}: Error finding container b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737: Status 404 returned error can't find the container with id b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737 Feb 24 02:20:50.224877 master-0 kubenswrapper[7864]: E0224 02:20:50.224200 7864 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{installer-3-master-0.18970d580d221c8f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-3-master-0,UID:070ebb2d-57a2-4c76-8c93-e09d398f3b73,APIVersion:v1,ResourceVersion:12818,FieldPath:spec.containers{installer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:20:50.222201999 +0000 UTC m=+1014.549855661,LastTimestamp:2026-02-24 02:20:50.222201999 +0000 UTC m=+1014.549855661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 02:20:50.248665 master-0 kubenswrapper[7864]: E0224 02:20:50.245941 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.284440 master-0 
kubenswrapper[7864]: I0224 02:20:50.284376 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.284563 master-0 kubenswrapper[7864]: I0224 02:20:50.284463 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.284711 master-0 kubenswrapper[7864]: I0224 02:20:50.284668 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.284785 master-0 kubenswrapper[7864]: I0224 02:20:50.284752 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.284840 master-0 kubenswrapper[7864]: I0224 02:20:50.284813 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.284888 master-0 kubenswrapper[7864]: I0224 02:20:50.284838 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.284933 master-0 kubenswrapper[7864]: I0224 02:20:50.284899 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.284999 master-0 kubenswrapper[7864]: I0224 02:20:50.284968 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.285150 master-0 kubenswrapper[7864]: I0224 02:20:50.285108 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.285207 master-0 kubenswrapper[7864]: I0224 02:20:50.285183 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.285327 master-0 kubenswrapper[7864]: I0224 02:20:50.285288 7864 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.285428 master-0 kubenswrapper[7864]: I0224 02:20:50.285400 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.285500 master-0 kubenswrapper[7864]: I0224 02:20:50.285474 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.285554 master-0 kubenswrapper[7864]: I0224 02:20:50.285527 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.285642 master-0 kubenswrapper[7864]: I0224 02:20:50.285612 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod 
\"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.285694 master-0 kubenswrapper[7864]: I0224 02:20:50.285669 7864 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:50.403475 master-0 kubenswrapper[7864]: I0224 02:20:50.403288 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"070ebb2d-57a2-4c76-8c93-e09d398f3b73","Type":"ContainerStarted","Data":"b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737"} Feb 24 02:20:50.407299 master-0 kubenswrapper[7864]: I0224 02:20:50.407242 7864 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce" exitCode=0 Feb 24 02:20:50.410090 master-0 kubenswrapper[7864]: I0224 02:20:50.410043 7864 generic.go:334] "Generic (PLEG): container finished" podID="5508683b-09ae-47a1-89fd-b0891a881e09" containerID="6bd403605e79109075e7b61bac31b57ae266809e2fcec35f73761229b419851f" exitCode=0 Feb 24 02:20:50.410181 master-0 kubenswrapper[7864]: I0224 02:20:50.410107 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5508683b-09ae-47a1-89fd-b0891a881e09","Type":"ContainerDied","Data":"6bd403605e79109075e7b61bac31b57ae266809e2fcec35f73761229b419851f"} Feb 24 02:20:50.411896 master-0 kubenswrapper[7864]: I0224 02:20:50.411809 7864 status_manager.go:851] "Failed to get status for pod" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" pod="openshift-kube-apiserver/installer-2-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:50.547461 master-0 kubenswrapper[7864]: I0224 02:20:50.547385 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:50.953367 master-0 kubenswrapper[7864]: E0224 02:20:50.953292 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:50.954171 master-0 kubenswrapper[7864]: E0224 02:20:50.954128 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:50.954935 master-0 kubenswrapper[7864]: E0224 02:20:50.954862 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:50.955907 master-0 kubenswrapper[7864]: E0224 02:20:50.955833 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:50.956701 master-0 kubenswrapper[7864]: E0224 02:20:50.956633 7864 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:50.956701 master-0 
kubenswrapper[7864]: I0224 02:20:50.956686 7864 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 02:20:50.957618 master-0 kubenswrapper[7864]: E0224 02:20:50.957543 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 24 02:20:51.084374 master-0 kubenswrapper[7864]: I0224 02:20:51.084291 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:51.084374 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:51.084374 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:51.084374 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:51.084794 master-0 kubenswrapper[7864]: I0224 02:20:51.084409 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:51.122233 master-0 kubenswrapper[7864]: I0224 02:20:51.122156 7864 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-retry-1-master-0_27f0c4d0-17dd-49ed-a8a4-7be1d82738c7/installer/0.log" Feb 24 02:20:51.122935 master-0 kubenswrapper[7864]: I0224 02:20:51.122271 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:51.123896 master-0 kubenswrapper[7864]: I0224 02:20:51.123809 7864 status_manager.go:851] "Failed to get status for pod" podUID="27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.124992 master-0 kubenswrapper[7864]: I0224 02:20:51.124901 7864 status_manager.go:851] "Failed to get status for pod" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.159516 master-0 kubenswrapper[7864]: E0224 02:20:51.159436 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 24 02:20:51.321388 master-0 kubenswrapper[7864]: I0224 02:20:51.321305 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kube-api-access\") pod \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " Feb 24 02:20:51.321618 master-0 kubenswrapper[7864]: I0224 02:20:51.321466 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kubelet-dir\") pod \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\" (UID: 
\"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " Feb 24 02:20:51.321618 master-0 kubenswrapper[7864]: I0224 02:20:51.321565 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-var-lock\") pod \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\" (UID: \"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7\") " Feb 24 02:20:51.322372 master-0 kubenswrapper[7864]: I0224 02:20:51.322316 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-var-lock" (OuterVolumeSpecName: "var-lock") pod "27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" (UID: "27f0c4d0-17dd-49ed-a8a4-7be1d82738c7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:51.322912 master-0 kubenswrapper[7864]: I0224 02:20:51.322825 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" (UID: "27f0c4d0-17dd-49ed-a8a4-7be1d82738c7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:51.327564 master-0 kubenswrapper[7864]: I0224 02:20:51.327509 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" (UID: "27f0c4d0-17dd-49ed-a8a4-7be1d82738c7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:20:51.424190 master-0 kubenswrapper[7864]: I0224 02:20:51.424103 7864 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4" exitCode=0 Feb 24 02:20:51.424373 master-0 kubenswrapper[7864]: I0224 02:20:51.424217 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerDied","Data":"4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4"} Feb 24 02:20:51.424373 master-0 kubenswrapper[7864]: I0224 02:20:51.424314 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"7b8c524be621d3b232cfcc53d4958e12d26da68f7931a17964ec87b85eee7bba"} Feb 24 02:20:51.424723 master-0 kubenswrapper[7864]: I0224 02:20:51.424669 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:51.424832 master-0 kubenswrapper[7864]: I0224 02:20:51.424724 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:51.424832 master-0 kubenswrapper[7864]: I0224 02:20:51.424751 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:51.427353 master-0 kubenswrapper[7864]: I0224 02:20:51.427268 7864 status_manager.go:851] "Failed to get status for pod" 
podUID="27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.427787 master-0 kubenswrapper[7864]: I0224 02:20:51.427718 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"070ebb2d-57a2-4c76-8c93-e09d398f3b73","Type":"ContainerStarted","Data":"51a3db0d894d96bab79a718a222631106e9405e2deb8d971fa5341ac8b946184"} Feb 24 02:20:51.428072 master-0 kubenswrapper[7864]: E0224 02:20:51.427981 7864 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:51.428952 master-0 kubenswrapper[7864]: I0224 02:20:51.428881 7864 status_manager.go:851] "Failed to get status for pod" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.430219 master-0 kubenswrapper[7864]: I0224 02:20:51.430048 7864 status_manager.go:851] "Failed to get status for pod" podUID="27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.431507 master-0 kubenswrapper[7864]: I0224 02:20:51.431465 7864 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-3-retry-1-master-0_27f0c4d0-17dd-49ed-a8a4-7be1d82738c7/installer/0.log" Feb 24 02:20:51.431656 master-0 kubenswrapper[7864]: I0224 02:20:51.431515 7864 status_manager.go:851] "Failed to get status for pod" podUID="070ebb2d-57a2-4c76-8c93-e09d398f3b73" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.431656 master-0 kubenswrapper[7864]: I0224 02:20:51.431560 7864 generic.go:334] "Generic (PLEG): container finished" podID="27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" containerID="14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f" exitCode=1 Feb 24 02:20:51.431795 master-0 kubenswrapper[7864]: I0224 02:20:51.431642 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7","Type":"ContainerDied","Data":"14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f"} Feb 24 02:20:51.431795 master-0 kubenswrapper[7864]: I0224 02:20:51.431701 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7","Type":"ContainerDied","Data":"f562d675528b0b4d595c589a6e7d7e57e6c855daab8a48f034a1f5df7b3b29a7"} Feb 24 02:20:51.431795 master-0 kubenswrapper[7864]: I0224 02:20:51.431735 7864 scope.go:117] "RemoveContainer" containerID="14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f" Feb 24 02:20:51.431795 master-0 kubenswrapper[7864]: I0224 02:20:51.431659 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:51.432873 master-0 kubenswrapper[7864]: I0224 02:20:51.432822 7864 status_manager.go:851] "Failed to get status for pod" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.434107 master-0 kubenswrapper[7864]: I0224 02:20:51.434065 7864 status_manager.go:851] "Failed to get status for pod" podUID="27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.434902 master-0 kubenswrapper[7864]: I0224 02:20:51.434830 7864 status_manager.go:851] "Failed to get status for pod" podUID="070ebb2d-57a2-4c76-8c93-e09d398f3b73" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.436649 master-0 kubenswrapper[7864]: I0224 02:20:51.435645 7864 status_manager.go:851] "Failed to get status for pod" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.471765 master-0 kubenswrapper[7864]: I0224 02:20:51.458379 7864 status_manager.go:851] "Failed to get status for pod" podUID="27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" 
pod="openshift-kube-scheduler/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.471765 master-0 kubenswrapper[7864]: I0224 02:20:51.466376 7864 status_manager.go:851] "Failed to get status for pod" podUID="070ebb2d-57a2-4c76-8c93-e09d398f3b73" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.471765 master-0 kubenswrapper[7864]: I0224 02:20:51.468443 7864 status_manager.go:851] "Failed to get status for pod" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.500084 master-0 kubenswrapper[7864]: I0224 02:20:51.500015 7864 scope.go:117] "RemoveContainer" containerID="14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f" Feb 24 02:20:51.501100 master-0 kubenswrapper[7864]: E0224 02:20:51.501043 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f\": container with ID starting with 14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f not found: ID does not exist" containerID="14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f" Feb 24 02:20:51.501198 master-0 kubenswrapper[7864]: I0224 02:20:51.501104 7864 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f"} err="failed to get container status \"14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f\": rpc error: code = NotFound desc = could not find container \"14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f\": container with ID starting with 14ab4ee9b49331e1ed7ba4898ca421a9e68557c1e81815d647d7a5983d4cb69f not found: ID does not exist" Feb 24 02:20:51.561330 master-0 kubenswrapper[7864]: E0224 02:20:51.561237 7864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 24 02:20:51.936207 master-0 kubenswrapper[7864]: I0224 02:20:51.935459 7864 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:20:51.937698 master-0 kubenswrapper[7864]: I0224 02:20:51.937613 7864 status_manager.go:851] "Failed to get status for pod" podUID="27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.938675 master-0 kubenswrapper[7864]: I0224 02:20:51.938470 7864 status_manager.go:851] "Failed to get status for pod" podUID="070ebb2d-57a2-4c76-8c93-e09d398f3b73" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:51.939694 master-0 kubenswrapper[7864]: I0224 02:20:51.939600 7864 status_manager.go:851] "Failed to 
get status for pod" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:20:52.041867 master-0 kubenswrapper[7864]: I0224 02:20:52.041714 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") pod \"5508683b-09ae-47a1-89fd-b0891a881e09\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " Feb 24 02:20:52.042219 master-0 kubenswrapper[7864]: I0224 02:20:52.041865 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5508683b-09ae-47a1-89fd-b0891a881e09" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:52.042219 master-0 kubenswrapper[7864]: I0224 02:20:52.041999 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"5508683b-09ae-47a1-89fd-b0891a881e09\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " Feb 24 02:20:52.042219 master-0 kubenswrapper[7864]: I0224 02:20:52.042029 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") pod \"5508683b-09ae-47a1-89fd-b0891a881e09\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " Feb 24 02:20:52.042434 master-0 kubenswrapper[7864]: I0224 02:20:52.042349 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock" (OuterVolumeSpecName: "var-lock") pod "5508683b-09ae-47a1-89fd-b0891a881e09" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:52.043398 master-0 kubenswrapper[7864]: I0224 02:20:52.043319 7864 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:52.043398 master-0 kubenswrapper[7864]: I0224 02:20:52.043348 7864 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:52.046839 master-0 kubenswrapper[7864]: I0224 02:20:52.046766 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5508683b-09ae-47a1-89fd-b0891a881e09" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:20:52.085105 master-0 kubenswrapper[7864]: I0224 02:20:52.084947 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:52.085105 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:52.085105 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:52.085105 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:52.085105 master-0 kubenswrapper[7864]: I0224 02:20:52.085069 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:52.145310 master-0 kubenswrapper[7864]: I0224 02:20:52.145229 7864 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:52.460328 master-0 kubenswrapper[7864]: I0224 02:20:52.460156 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:20:52.460328 master-0 kubenswrapper[7864]: I0224 02:20:52.460282 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5508683b-09ae-47a1-89fd-b0891a881e09","Type":"ContainerDied","Data":"84c61cc1c9db5c98d72530e3929e9be3931d553cf93f1ab6621e14c53b3476fa"} Feb 24 02:20:52.460444 master-0 kubenswrapper[7864]: I0224 02:20:52.460356 7864 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c61cc1c9db5c98d72530e3929e9be3931d553cf93f1ab6621e14c53b3476fa" Feb 24 02:20:52.464887 master-0 kubenswrapper[7864]: I0224 02:20:52.464223 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c"} Feb 24 02:20:52.464887 master-0 kubenswrapper[7864]: I0224 02:20:52.464258 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80"} Feb 24 02:20:52.492257 master-0 kubenswrapper[7864]: I0224 02:20:52.492224 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:20:52.659339 master-0 kubenswrapper[7864]: I0224 02:20:52.658899 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " Feb 24 02:20:52.659339 master-0 kubenswrapper[7864]: I0224 02:20:52.659054 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " Feb 24 02:20:52.659339 master-0 kubenswrapper[7864]: I0224 02:20:52.659063 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config" (OuterVolumeSpecName: "config") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:52.659339 master-0 kubenswrapper[7864]: I0224 02:20:52.659195 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " Feb 24 02:20:52.659339 master-0 kubenswrapper[7864]: I0224 02:20:52.659244 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " Feb 24 02:20:52.659339 master-0 kubenswrapper[7864]: I0224 02:20:52.659242 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets" (OuterVolumeSpecName: "secrets") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:52.659339 master-0 kubenswrapper[7864]: I0224 02:20:52.659316 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:52.659339 master-0 kubenswrapper[7864]: I0224 02:20:52.659335 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " Feb 24 02:20:52.659785 master-0 kubenswrapper[7864]: I0224 02:20:52.659366 7864 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " Feb 24 02:20:52.659785 master-0 kubenswrapper[7864]: I0224 02:20:52.659733 7864 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:52.659785 master-0 kubenswrapper[7864]: I0224 02:20:52.659747 7864 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:52.659785 master-0 kubenswrapper[7864]: I0224 02:20:52.659756 7864 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:52.659973 master-0 kubenswrapper[7864]: I0224 02:20:52.659811 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:52.659973 master-0 kubenswrapper[7864]: I0224 02:20:52.659850 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs" (OuterVolumeSpecName: "logs") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:52.659973 master-0 kubenswrapper[7864]: I0224 02:20:52.659867 7864 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:52.761663 master-0 kubenswrapper[7864]: I0224 02:20:52.761550 7864 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:52.761663 master-0 kubenswrapper[7864]: I0224 02:20:52.761618 7864 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:52.761663 master-0 kubenswrapper[7864]: I0224 02:20:52.761636 7864 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:53.084477 master-0 kubenswrapper[7864]: I0224 02:20:53.084412 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:53.084477 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:53.084477 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:53.084477 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:53.084869 master-0 kubenswrapper[7864]: I0224 02:20:53.084492 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:53.592749 master-0 kubenswrapper[7864]: I0224 02:20:53.592610 7864 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8" exitCode=0 Feb 24 02:20:53.594420 master-0 kubenswrapper[7864]: I0224 02:20:53.594369 7864 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 02:20:53.609250 master-0 kubenswrapper[7864]: I0224 02:20:53.608647 7864 scope.go:117] "RemoveContainer" containerID="5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce" Feb 24 02:20:53.617919 master-0 kubenswrapper[7864]: I0224 02:20:53.617879 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1"} Feb 24 02:20:53.618413 master-0 kubenswrapper[7864]: I0224 02:20:53.618394 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27"} Feb 24 02:20:53.618524 master-0 kubenswrapper[7864]: I0224 02:20:53.618510 7864 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:53.619004 master-0 kubenswrapper[7864]: I0224 02:20:53.618987 7864 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976"} Feb 24 02:20:53.653673 master-0 kubenswrapper[7864]: I0224 02:20:53.653641 7864 scope.go:117] "RemoveContainer" containerID="c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8" Feb 24 02:20:53.686449 master-0 kubenswrapper[7864]: I0224 02:20:53.686412 7864 scope.go:117] "RemoveContainer" containerID="2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416" Feb 24 02:20:53.755104 master-0 kubenswrapper[7864]: I0224 02:20:53.755030 7864 scope.go:117] "RemoveContainer" 
containerID="5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce" Feb 24 02:20:53.756712 master-0 kubenswrapper[7864]: E0224 02:20:53.756667 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce\": container with ID starting with 5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce not found: ID does not exist" containerID="5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce" Feb 24 02:20:53.756786 master-0 kubenswrapper[7864]: I0224 02:20:53.756710 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce"} err="failed to get container status \"5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce\": rpc error: code = NotFound desc = could not find container \"5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce\": container with ID starting with 5049829a75ed57116cd28e395328151f0338930890f9c71ea2b44193759d72ce not found: ID does not exist" Feb 24 02:20:53.756786 master-0 kubenswrapper[7864]: I0224 02:20:53.756735 7864 scope.go:117] "RemoveContainer" containerID="c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8" Feb 24 02:20:53.758147 master-0 kubenswrapper[7864]: E0224 02:20:53.758068 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8\": container with ID starting with c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8 not found: ID does not exist" containerID="c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8" Feb 24 02:20:53.758147 master-0 kubenswrapper[7864]: I0224 02:20:53.758098 7864 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8"} err="failed to get container status \"c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8\": rpc error: code = NotFound desc = could not find container \"c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8\": container with ID starting with c3247cb01609ca2f589c633a4d3bc99376b997d00c142d6d2beea7dccc5eceb8 not found: ID does not exist" Feb 24 02:20:53.758147 master-0 kubenswrapper[7864]: I0224 02:20:53.758114 7864 scope.go:117] "RemoveContainer" containerID="2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416" Feb 24 02:20:53.761840 master-0 kubenswrapper[7864]: E0224 02:20:53.761792 7864 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416\": container with ID starting with 2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416 not found: ID does not exist" containerID="2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416" Feb 24 02:20:53.761922 master-0 kubenswrapper[7864]: I0224 02:20:53.761848 7864 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416"} err="failed to get container status \"2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416\": rpc error: code = NotFound desc = could not find container \"2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416\": container with ID starting with 2d7d0e564c8b8a31e63160aa69eba9da27910f88d4e6e998d994db3817f8b416 not found: ID does not exist" Feb 24 02:20:53.897885 master-0 kubenswrapper[7864]: I0224 02:20:53.897734 7864 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687e92a6cecf1e2beeef16a0b322ad08" path="/var/lib/kubelet/pods/687e92a6cecf1e2beeef16a0b322ad08/volumes" Feb 24 
02:20:53.898600 master-0 kubenswrapper[7864]: I0224 02:20:53.898326 7864 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 24 02:20:54.085010 master-0 kubenswrapper[7864]: I0224 02:20:54.084929 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:54.085010 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:54.085010 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:54.085010 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:54.085352 master-0 kubenswrapper[7864]: I0224 02:20:54.085049 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:55.085433 master-0 kubenswrapper[7864]: I0224 02:20:55.085354 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:55.085433 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:55.085433 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:55.085433 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:55.086527 master-0 kubenswrapper[7864]: I0224 02:20:55.085443 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:55.237132 
master-0 kubenswrapper[7864]: I0224 02:20:55.237064 7864 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:55.287794 master-0 kubenswrapper[7864]: W0224 02:20:55.287738 7864 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95806c9442ee27c355bfbf25ba6f70f0.slice/crio-a97c301937f1a0e25ebd74de8f7b7dfda3c088599ac5506143f0a1006a2bb044 WatchSource:0}: Error finding container a97c301937f1a0e25ebd74de8f7b7dfda3c088599ac5506143f0a1006a2bb044: Status 404 returned error can't find the container with id a97c301937f1a0e25ebd74de8f7b7dfda3c088599ac5506143f0a1006a2bb044 Feb 24 02:20:56.085686 master-0 kubenswrapper[7864]: I0224 02:20:56.085563 7864 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:20:56.085686 master-0 kubenswrapper[7864]: [-]has-synced failed: reason withheld Feb 24 02:20:56.085686 master-0 kubenswrapper[7864]: [+]process-running ok Feb 24 02:20:56.085686 master-0 kubenswrapper[7864]: healthz check failed Feb 24 02:20:56.086820 master-0 kubenswrapper[7864]: I0224 02:20:56.085706 7864 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:20:56.607376 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 24 02:20:56.638356 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 24 02:20:56.638895 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 24 02:20:56.645984 master-0 systemd[1]: kubelet.service: Consumed 2min 54.254s CPU time. 
Feb 24 02:20:56.673163 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 24 02:20:56.854988 master-0 kubenswrapper[31411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 02:20:56.854988 master-0 kubenswrapper[31411]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 24 02:20:56.854988 master-0 kubenswrapper[31411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 02:20:56.854988 master-0 kubenswrapper[31411]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 02:20:56.854988 master-0 kubenswrapper[31411]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 24 02:20:56.854988 master-0 kubenswrapper[31411]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 24 02:20:56.854988 master-0 kubenswrapper[31411]: I0224 02:20:56.854692 31411 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 24 02:20:56.859956 master-0 kubenswrapper[31411]: W0224 02:20:56.859909 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 02:20:56.859956 master-0 kubenswrapper[31411]: W0224 02:20:56.859942 31411 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 02:20:56.859956 master-0 kubenswrapper[31411]: W0224 02:20:56.859952 31411 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 02:20:56.859956 master-0 kubenswrapper[31411]: W0224 02:20:56.859962 31411 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.859971 31411 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.859981 31411 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.859991 31411 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860001 31411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860010 31411 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860019 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860027 31411 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860036 31411 feature_gate.go:330] 
unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860044 31411 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860052 31411 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860059 31411 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860068 31411 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860076 31411 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860085 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860093 31411 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860101 31411 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860109 31411 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860117 31411 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860125 31411 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 02:20:56.860713 master-0 kubenswrapper[31411]: W0224 02:20:56.860135 31411 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860143 
31411 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860151 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860159 31411 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860168 31411 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860176 31411 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860184 31411 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860192 31411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860200 31411 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860207 31411 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860220 31411 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860231 31411 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860239 31411 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860248 31411 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860257 31411 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860265 31411 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860273 31411 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860281 31411 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860289 31411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860297 31411 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 02:20:56.862514 master-0 kubenswrapper[31411]: W0224 02:20:56.860305 31411 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860313 31411 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860320 31411 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860328 31411 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 02:20:56.863730 
master-0 kubenswrapper[31411]: W0224 02:20:56.860338 31411 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860351 31411 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860361 31411 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860370 31411 feature_gate.go:330] unrecognized feature gate: Example Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860379 31411 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860388 31411 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860398 31411 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860409 31411 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860421 31411 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860432 31411 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860442 31411 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860450 31411 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860459 31411 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860467 31411 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860475 31411 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 02:20:56.863730 master-0 kubenswrapper[31411]: W0224 02:20:56.860484 31411 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: W0224 02:20:56.860492 31411 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: W0224 02:20:56.860500 31411 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: W0224 02:20:56.860508 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: W0224 02:20:56.860516 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: W0224 02:20:56.860524 31411 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 02:20:56.864801 master-0 
kubenswrapper[31411]: W0224 02:20:56.860532 31411 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: W0224 02:20:56.860540 31411 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: W0224 02:20:56.860548 31411 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: W0224 02:20:56.860556 31411 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860774 31411 flags.go:64] FLAG: --address="0.0.0.0" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860792 31411 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860810 31411 flags.go:64] FLAG: --anonymous-auth="true" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860823 31411 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860835 31411 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860846 31411 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860860 31411 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860872 31411 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860883 31411 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860893 31411 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 24 02:20:56.864801 
master-0 kubenswrapper[31411]: I0224 02:20:56.860904 31411 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860914 31411 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 24 02:20:56.864801 master-0 kubenswrapper[31411]: I0224 02:20:56.860923 31411 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.860933 31411 flags.go:64] FLAG: --cgroup-root="" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.860942 31411 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.860952 31411 flags.go:64] FLAG: --client-ca-file="" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.860962 31411 flags.go:64] FLAG: --cloud-config="" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.860971 31411 flags.go:64] FLAG: --cloud-provider="" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.860980 31411 flags.go:64] FLAG: --cluster-dns="[]" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.860991 31411 flags.go:64] FLAG: --cluster-domain="" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861000 31411 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861010 31411 flags.go:64] FLAG: --config-dir="" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861127 31411 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861141 31411 flags.go:64] FLAG: --container-log-max-files="5" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861154 31411 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861164 31411 
flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861174 31411 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861184 31411 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861194 31411 flags.go:64] FLAG: --contention-profiling="false" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861204 31411 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861213 31411 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861223 31411 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861233 31411 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861244 31411 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861254 31411 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861263 31411 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861272 31411 flags.go:64] FLAG: --enable-load-reader="false" Feb 24 02:20:56.866059 master-0 kubenswrapper[31411]: I0224 02:20:56.861282 31411 flags.go:64] FLAG: --enable-server="true" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861292 31411 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861305 31411 flags.go:64] FLAG: --event-burst="100" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 
02:20:56.861315 31411 flags.go:64] FLAG: --event-qps="50" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861324 31411 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861334 31411 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861343 31411 flags.go:64] FLAG: --eviction-hard="" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861355 31411 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861367 31411 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861376 31411 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861386 31411 flags.go:64] FLAG: --eviction-soft="" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861395 31411 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861405 31411 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861415 31411 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861424 31411 flags.go:64] FLAG: --experimental-mounter-path="" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861433 31411 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861443 31411 flags.go:64] FLAG: --fail-swap-on="true" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861453 31411 flags.go:64] FLAG: --feature-gates="" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861465 31411 
flags.go:64] FLAG: --file-check-frequency="20s" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861474 31411 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861484 31411 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861494 31411 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861504 31411 flags.go:64] FLAG: --healthz-port="10248" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861514 31411 flags.go:64] FLAG: --help="false" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861523 31411 flags.go:64] FLAG: --hostname-override="" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861532 31411 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 24 02:20:56.867408 master-0 kubenswrapper[31411]: I0224 02:20:56.861543 31411 flags.go:64] FLAG: --http-check-frequency="20s" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861552 31411 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861561 31411 flags.go:64] FLAG: --image-credential-provider-config="" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861570 31411 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861608 31411 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861618 31411 flags.go:64] FLAG: --image-service-endpoint="" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861627 31411 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861636 31411 flags.go:64] FLAG: 
--kube-api-burst="100" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861646 31411 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861656 31411 flags.go:64] FLAG: --kube-api-qps="50" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861665 31411 flags.go:64] FLAG: --kube-reserved="" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861675 31411 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861684 31411 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861694 31411 flags.go:64] FLAG: --kubelet-cgroups="" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861703 31411 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861712 31411 flags.go:64] FLAG: --lock-file="" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861722 31411 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861734 31411 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861743 31411 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861758 31411 flags.go:64] FLAG: --log-json-split-stream="false" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861767 31411 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861777 31411 flags.go:64] FLAG: --log-text-split-stream="false" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861787 31411 flags.go:64] FLAG: --logging-format="text" Feb 
24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861796 31411 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861806 31411 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 24 02:20:56.868855 master-0 kubenswrapper[31411]: I0224 02:20:56.861815 31411 flags.go:64] FLAG: --manifest-url="" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861825 31411 flags.go:64] FLAG: --manifest-url-header="" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861837 31411 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861847 31411 flags.go:64] FLAG: --max-open-files="1000000" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861859 31411 flags.go:64] FLAG: --max-pods="110" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861869 31411 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861878 31411 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861889 31411 flags.go:64] FLAG: --memory-manager-policy="None" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861898 31411 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861908 31411 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861918 31411 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.861984 31411 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 24 02:20:56.870248 master-0 
kubenswrapper[31411]: I0224 02:20:56.862007 31411 flags.go:64] FLAG: --node-status-max-images="50" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862017 31411 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862026 31411 flags.go:64] FLAG: --oom-score-adj="-999" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862035 31411 flags.go:64] FLAG: --pod-cidr="" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862045 31411 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862058 31411 flags.go:64] FLAG: --pod-manifest-path="" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862068 31411 flags.go:64] FLAG: --pod-max-pids="-1" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862077 31411 flags.go:64] FLAG: --pods-per-core="0" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862087 31411 flags.go:64] FLAG: --port="10250" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862097 31411 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862106 31411 flags.go:64] FLAG: --provider-id="" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862115 31411 flags.go:64] FLAG: --qos-reserved="" Feb 24 02:20:56.870248 master-0 kubenswrapper[31411]: I0224 02:20:56.862124 31411 flags.go:64] FLAG: --read-only-port="10255" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862134 31411 flags.go:64] FLAG: --register-node="true" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862145 31411 flags.go:64] FLAG: --register-schedulable="true" Feb 24 02:20:56.871543 master-0 
kubenswrapper[31411]: I0224 02:20:56.862154 31411 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862170 31411 flags.go:64] FLAG: --registry-burst="10" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862180 31411 flags.go:64] FLAG: --registry-qps="5" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862189 31411 flags.go:64] FLAG: --reserved-cpus="" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862199 31411 flags.go:64] FLAG: --reserved-memory="" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862211 31411 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862221 31411 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862231 31411 flags.go:64] FLAG: --rotate-certificates="false" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862241 31411 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862251 31411 flags.go:64] FLAG: --runonce="false" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862261 31411 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862270 31411 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862281 31411 flags.go:64] FLAG: --seccomp-default="false" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862291 31411 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862300 31411 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 24 02:20:56.871543 master-0 
kubenswrapper[31411]: I0224 02:20:56.862310 31411 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862320 31411 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862330 31411 flags.go:64] FLAG: --storage-driver-password="root" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862339 31411 flags.go:64] FLAG: --storage-driver-secure="false" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862349 31411 flags.go:64] FLAG: --storage-driver-table="stats" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862358 31411 flags.go:64] FLAG: --storage-driver-user="root" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862368 31411 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 24 02:20:56.871543 master-0 kubenswrapper[31411]: I0224 02:20:56.862378 31411 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862387 31411 flags.go:64] FLAG: --system-cgroups="" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862397 31411 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862411 31411 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862420 31411 flags.go:64] FLAG: --tls-cert-file="" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862429 31411 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862441 31411 flags.go:64] FLAG: --tls-min-version="" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862449 31411 flags.go:64] FLAG: --tls-private-key-file="" Feb 24 02:20:56.872976 master-0 
kubenswrapper[31411]: I0224 02:20:56.862459 31411 flags.go:64] FLAG: --topology-manager-policy="none" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862468 31411 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862478 31411 flags.go:64] FLAG: --topology-manager-scope="container" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862487 31411 flags.go:64] FLAG: --v="2" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862500 31411 flags.go:64] FLAG: --version="false" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862512 31411 flags.go:64] FLAG: --vmodule="" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862525 31411 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: I0224 02:20:56.862536 31411 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: W0224 02:20:56.862818 31411 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: W0224 02:20:56.862831 31411 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: W0224 02:20:56.862840 31411 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: W0224 02:20:56.862850 31411 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: W0224 02:20:56.862859 31411 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: W0224 02:20:56.862868 31411 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: W0224 
02:20:56.862876 31411 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 02:20:56.872976 master-0 kubenswrapper[31411]: W0224 02:20:56.862886 31411 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862897 31411 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862908 31411 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862919 31411 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862928 31411 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862937 31411 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862946 31411 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862965 31411 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862974 31411 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862982 31411 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.862991 31411 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.863001 31411 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.863011 31411 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.863019 31411 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.863028 31411 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.863037 31411 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.863046 31411 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.863055 31411 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.863063 31411 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 02:20:56.874462 master-0 kubenswrapper[31411]: W0224 02:20:56.863073 31411 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863082 31411 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863090 31411 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863099 31411 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863107 31411 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863123 31411 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 
24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863131 31411 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863139 31411 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863147 31411 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863155 31411 feature_gate.go:330] unrecognized feature gate: Example Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863163 31411 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863171 31411 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863179 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863188 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863196 31411 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863204 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863212 31411 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863221 31411 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863228 31411 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 02:20:56.875699 master-0 
kubenswrapper[31411]: W0224 02:20:56.863237 31411 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 02:20:56.875699 master-0 kubenswrapper[31411]: W0224 02:20:56.863245 31411 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863253 31411 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863261 31411 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863269 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863281 31411 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863292 31411 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863302 31411 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863312 31411 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863322 31411 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863331 31411 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863340 31411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863348 31411 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863357 31411 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863366 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863375 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863384 31411 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863392 31411 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863404 31411 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863412 31411 feature_gate.go:330] unrecognized feature gate: 
VSphereDriverConfiguration Feb 24 02:20:56.876854 master-0 kubenswrapper[31411]: W0224 02:20:56.863420 31411 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.863428 31411 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.863437 31411 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.863445 31411 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.863453 31411 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.863461 31411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.863471 31411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: I0224 02:20:56.863497 31411 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: I0224 02:20:56.873404 31411 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: I0224 02:20:56.873453 31411 server.go:493] "Golang 
settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.874858 31411 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.874890 31411 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.874903 31411 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.874923 31411 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.874934 31411 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 02:20:56.877955 master-0 kubenswrapper[31411]: W0224 02:20:56.874945 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.874955 31411 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.874966 31411 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.874975 31411 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875002 31411 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875012 31411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875022 31411 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875030 31411 feature_gate.go:330] 
unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875040 31411 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875049 31411 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875060 31411 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875070 31411 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875079 31411 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875088 31411 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875098 31411 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875106 31411 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875131 31411 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875139 31411 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875148 31411 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875157 31411 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 02:20:56.878967 master-0 kubenswrapper[31411]: W0224 02:20:56.875166 31411 
feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875175 31411 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875184 31411 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875193 31411 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875305 31411 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875316 31411 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875325 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875334 31411 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875345 31411 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875364 31411 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875375 31411 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875384 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875393 31411 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875401 31411 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875409 31411 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875419 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875428 31411 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875439 31411 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875450 31411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 02:20:56.880144 master-0 kubenswrapper[31411]: W0224 02:20:56.875459 31411 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875470 31411 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875485 31411 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875495 31411 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875504 31411 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875514 31411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875523 31411 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875533 31411 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875541 31411 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875550 31411 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875558 31411 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875567 31411 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: 
W0224 02:20:56.875601 31411 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875686 31411 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875696 31411 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875715 31411 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875726 31411 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875735 31411 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875745 31411 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875756 31411 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875766 31411 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 02:20:56.881358 master-0 kubenswrapper[31411]: W0224 02:20:56.875918 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.876537 31411 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.876626 31411 feature_gate.go:330] unrecognized feature gate: Example Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.876639 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877085 31411 feature_gate.go:330] unrecognized feature gate: 
SetEIPForNLBIngressController Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877144 31411 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877162 31411 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: I0224 02:20:56.877181 31411 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877648 31411 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877669 31411 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877678 31411 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877688 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877697 31411 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877708 31411 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 02:20:56.882517 master-0 
kubenswrapper[31411]: W0224 02:20:56.877717 31411 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 02:20:56.882517 master-0 kubenswrapper[31411]: W0224 02:20:56.877729 31411 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877738 31411 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877747 31411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877756 31411 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877764 31411 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877775 31411 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877788 31411 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877796 31411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877807 31411 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877815 31411 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877827 31411 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
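The flood of `feature_gate.go:330] unrecognized feature gate: ...` warnings above comes from OpenShift-specific gate names being passed to the upstream kubelet gate parser, which only knows upstream Kubernetes gates. When triaging logs like these, it can help to collapse the flood into the distinct set of gate names. A minimal sketch (the `sample` text below is a shortened excerpt, not the full log):

```python
import re

def unrecognized_gates(log_text):
    """Collect the distinct feature-gate names the kubelet reported as unrecognized."""
    return sorted(set(re.findall(r"unrecognized feature gate: (\S+)", log_text)))

# Shortened excerpt of the warnings above; real logs repeat many names.
sample = (
    "W0224 02:20:56.875450 31411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed "
    "W0224 02:20:56.875459 31411 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration "
    "W0224 02:20:56.877747 31411 feature_gate.go:330] unrecognized feature gate: HardwareSpeed"
)
print(unrecognized_gates(sample))  # ['HardwareSpeed', 'NetworkLiveMigration']
```

Deduplicating this way makes it obvious that the same names (e.g. `HardwareSpeed`, `ExternalOIDC`) are warned about more than once because the gate set is parsed in multiple passes.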
Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877838 31411 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877847 31411 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877856 31411 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877865 31411 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877873 31411 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877881 31411 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877891 31411 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877899 31411 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 02:20:56.885351 master-0 kubenswrapper[31411]: W0224 02:20:56.877908 31411 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.877916 31411 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.877948 31411 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.877959 31411 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.877968 31411 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 02:20:56.886702 
master-0 kubenswrapper[31411]: W0224 02:20:56.877980 31411 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.877988 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.877997 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878006 31411 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878014 31411 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878023 31411 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878034 31411 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878043 31411 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878051 31411 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878060 31411 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878068 31411 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878077 31411 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878085 31411 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 
02:20:56.878093 31411 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878101 31411 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 02:20:56.886702 master-0 kubenswrapper[31411]: W0224 02:20:56.878110 31411 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878119 31411 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878127 31411 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878138 31411 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878148 31411 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878156 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878165 31411 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878173 31411 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878181 31411 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878191 31411 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878199 31411 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 
02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878208 31411 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878216 31411 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878225 31411 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878237 31411 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878247 31411 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878256 31411 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878266 31411 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878275 31411 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 02:20:56.889432 master-0 kubenswrapper[31411]: W0224 02:20:56.878284 31411 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: W0224 02:20:56.878293 31411 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: W0224 02:20:56.878301 31411 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: W0224 02:20:56.878310 31411 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: W0224 02:20:56.878318 31411 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 
24 02:20:56.890825 master-0 kubenswrapper[31411]: W0224 02:20:56.878326 31411 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: W0224 02:20:56.878335 31411 feature_gate.go:330] unrecognized feature gate: Example Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: I0224 02:20:56.878349 31411 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: I0224 02:20:56.878876 31411 server.go:940] "Client rotation is on, will bootstrap in background" Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: I0224 02:20:56.882485 31411 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: I0224 02:20:56.882692 31411 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
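The `feature_gate.go:386] feature gates: {map[...]}` record above is the authoritative summary of which recognized gates are actually in effect (the Go map's `fmt` rendering of name:bool pairs). A minimal sketch for pulling that map out of a log line into a Python dict; the `line` value is a shortened excerpt of the record above, not the full map:

```python
import re

def parse_feature_gates(line):
    """Extract the gate->bool map from a kubelet 'feature gates: {map[...]}' log line."""
    m = re.search(r"feature gates: \{map\[(.*)\]\}", line)
    if not m:
        return {}
    gates = {}
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = (value == "true")
    return gates

# Shortened excerpt of the record above.
line = ('I0224 02:20:56.878349 31411 feature_gate.go:386] feature gates: '
        '{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false '
        'ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}')
gates = parse_feature_gates(line)
print(gates["KMSv1"], gates["NodeSwap"])  # True False
```

Comparing this parsed map across restarts (the same record appears earlier in this boot at 02:01:55) is a quick way to confirm that a config change actually altered the effective gate set.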
Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: I0224 02:20:56.883188 31411 server.go:997] "Starting client certificate rotation" Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: I0224 02:20:56.883210 31411 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 24 02:20:56.890825 master-0 kubenswrapper[31411]: I0224 02:20:56.884684 31411 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 02:20:56.891550 master-0 kubenswrapper[31411]: I0224 02:20:56.885508 31411 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-25 01:54:04 +0000 UTC, rotation deadline is 2026-02-24 20:07:48.863799903 +0000 UTC Feb 24 02:20:56.891550 master-0 kubenswrapper[31411]: I0224 02:20:56.885719 31411 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h46m51.978087484s for next certificate rotation Feb 24 02:20:56.891550 master-0 kubenswrapper[31411]: I0224 02:20:56.888135 31411 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 02:20:56.892921 master-0 kubenswrapper[31411]: I0224 02:20:56.892875 31411 log.go:25] "Validated CRI v1 runtime API" Feb 24 02:20:56.900203 master-0 kubenswrapper[31411]: I0224 02:20:56.900142 31411 log.go:25] "Validated CRI v1 image API" Feb 24 02:20:56.902100 master-0 kubenswrapper[31411]: I0224 02:20:56.902050 31411 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 24 02:20:56.923548 master-0 kubenswrapper[31411]: I0224 02:20:56.923464 31411 fs.go:135] Filesystem UUIDs: map[19c17b43-4715-4d15-ba6d-72e795fc4d8f:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 24 02:20:56.925518 master-0 kubenswrapper[31411]: I0224 02:20:56.923517 31411 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0034746b398351f91b0a88e97985b40bb4895c122d618141a5cb5cca87941d23/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0034746b398351f91b0a88e97985b40bb4895c122d618141a5cb5cca87941d23/userdata/shm major:0 minor:808 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0a122f4d5531f3489b5545a54ec94812a3a4adf1ceb59316f98f88f87840e7dc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0a122f4d5531f3489b5545a54ec94812a3a4adf1ceb59316f98f88f87840e7dc/userdata/shm major:0 minor:1125 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e94bb6d8da81f692c353aed9041e8cea1ef96da518c0c68ab1453f8b2183856/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e94bb6d8da81f692c353aed9041e8cea1ef96da518c0c68ab1453f8b2183856/userdata/shm major:0 minor:619 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0fcf5ec505a8c60fa755fbe1033404d9a2bfa8dd51c8a8904db6a212bec7d594/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0fcf5ec505a8c60fa755fbe1033404d9a2bfa8dd51c8a8904db6a212bec7d594/userdata/shm major:0 minor:640 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/14bae3b4a416d85f1092a8f00a7f0b630edcc978d596d53dd23dbb652596ecd8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/14bae3b4a416d85f1092a8f00a7f0b630edcc978d596d53dd23dbb652596ecd8/userdata/shm major:0 minor:1297 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1684de33a1b99214450be6d5f8c060f5a9f1a7c517e642d15f1d667f8a119c75/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1684de33a1b99214450be6d5f8c060f5a9f1a7c517e642d15f1d667f8a119c75/userdata/shm major:0 minor:1252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1902deada2be96bfa5d915252c7df17f18da007080acab9c3aa02ba85365b1cc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1902deada2be96bfa5d915252c7df17f18da007080acab9c3aa02ba85365b1cc/userdata/shm major:0 minor:481 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1cc1996551692c223eb12edcadd4f14bef06fed859ebb6d00f4391944783b38d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1cc1996551692c223eb12edcadd4f14bef06fed859ebb6d00f4391944783b38d/userdata/shm major:0 minor:280 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1cf9c3623efa047b3c733a0c601bd847d659d71e97fdf999c590347704f0d5c3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1cf9c3623efa047b3c733a0c601bd847d659d71e97fdf999c590347704f0d5c3/userdata/shm major:0 minor:1014 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/200fe8533b917a93531075b1165c1c0e535a0cad7b646b4108d64e256389832b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/200fe8533b917a93531075b1165c1c0e535a0cad7b646b4108d64e256389832b/userdata/shm major:0 minor:818 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/21aa7b4dfda40f1610fd6b64e23f1c617ce7b50ea96960fc42e2a8aaa9a792b2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/21aa7b4dfda40f1610fd6b64e23f1c617ce7b50ea96960fc42e2a8aaa9a792b2/userdata/shm major:0 minor:589 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/229e86eaf4e88e77ef5e6c4bc8577da2618b97072b856ff2c58bb725165574ff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/229e86eaf4e88e77ef5e6c4bc8577da2618b97072b856ff2c58bb725165574ff/userdata/shm major:0 minor:442 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2335a36c0dbf2382a55062b41bd9de9b70220499140428315aa61fdfd6dde11f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2335a36c0dbf2382a55062b41bd9de9b70220499140428315aa61fdfd6dde11f/userdata/shm major:0 minor:973 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/276e47463c76f9595550735ca5a2eb97f44bfa685298a20ea61ee705f8a41bd4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/276e47463c76f9595550735ca5a2eb97f44bfa685298a20ea61ee705f8a41bd4/userdata/shm major:0 minor:1130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/29621914ea05b7d9aefb3ef92742f6212ca05bc6251d28674ae45265f66276a1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/29621914ea05b7d9aefb3ef92742f6212ca05bc6251d28674ae45265f66276a1/userdata/shm major:0 minor:325 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/29c6111030d71a276fc5ae8422a3897c52faae1bbf5d2f44516c595b0829852b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/29c6111030d71a276fc5ae8422a3897c52faae1bbf5d2f44516c595b0829852b/userdata/shm major:0 minor:811 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2dab12c36fbca650a107bc58df00044fd6561209f9c466f04a4c8ce72b69201d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2dab12c36fbca650a107bc58df00044fd6561209f9c466f04a4c8ce72b69201d/userdata/shm major:0 minor:1121 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/35cb29197091c21cf559145587727d1cb31b46813a4d0aded5d8409120c45182/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35cb29197091c21cf559145587727d1cb31b46813a4d0aded5d8409120c45182/userdata/shm major:0 minor:634 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39f5130f968afbc399fff9d4193b2dc7547fc4010b24b675d8cfe4f908871554/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39f5130f968afbc399fff9d4193b2dc7547fc4010b24b675d8cfe4f908871554/userdata/shm major:0 minor:632 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3d94304059d808624e692a18999e46c1ed32aa07c16bb3ea5a63de6a687dd377/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3d94304059d808624e692a18999e46c1ed32aa07c16bb3ea5a63de6a687dd377/userdata/shm major:0 minor:338 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/45681e7db0a00432167c0ceb01dfa150d4182b397673d5a0da048e4b9054ffea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/45681e7db0a00432167c0ceb01dfa150d4182b397673d5a0da048e4b9054ffea/userdata/shm major:0 minor:771 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/46f23e74184a869450a53e076049b086fc11c3d08fab3acc813aa63061b356f3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/46f23e74184a869450a53e076049b086fc11c3d08fab3acc813aa63061b356f3/userdata/shm major:0 minor:70 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/513f9261949841afe139d9cdba0a1314c71b8cc3ca522e4a37e97a5c0f7cd056/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/513f9261949841afe139d9cdba0a1314c71b8cc3ca522e4a37e97a5c0f7cd056/userdata/shm major:0 minor:1168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5365291f917df72d79e3f7635bb96352fde98df3b379b1b1c5331b7f5952d294/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5365291f917df72d79e3f7635bb96352fde98df3b379b1b1c5331b7f5952d294/userdata/shm major:0 minor:638 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/560cc2fc6affd50d504fd0043ec0076b50148137946f51caca10417e2832ae2a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/560cc2fc6affd50d504fd0043ec0076b50148137946f51caca10417e2832ae2a/userdata/shm major:0 minor:822 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/56b5dc5b3e9740ae05d95dc7b2a84307e363cddd956bef52b197b1f840f462b7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/56b5dc5b3e9740ae05d95dc7b2a84307e363cddd956bef52b197b1f840f462b7/userdata/shm major:0 minor:479 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/583b21f55c4eaab72f3731e41c571ee4872bace9012dd0a496219d9d98220f85/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/583b21f55c4eaab72f3731e41c571ee4872bace9012dd0a496219d9d98220f85/userdata/shm major:0 minor:485 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/586631f1005e0eec9e04637dd3347ca45f3a799902b12b7ee4c09257fef0aee9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/586631f1005e0eec9e04637dd3347ca45f3a799902b12b7ee4c09257fef0aee9/userdata/shm major:0 minor:424 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ba2f6486b90f665f4193dee37876ce40336ba0c3b009bf85c911f6014a84585/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ba2f6486b90f665f4193dee37876ce40336ba0c3b009bf85c911f6014a84585/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5c3398a6c263edc9332a777f898c18bf8d4d5354af4bc2396e80f920a1e77f07/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5c3398a6c263edc9332a777f898c18bf8d4d5354af4bc2396e80f920a1e77f07/userdata/shm major:0 minor:751 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/61e5e77001e1e5b4b53f6c82868401419bbcf0e5600dbe4c283c403c8bc8a720/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/61e5e77001e1e5b4b53f6c82868401419bbcf0e5600dbe4c283c403c8bc8a720/userdata/shm major:0 minor:837 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/627eb8f17d5fd787312a09b22ed574b8b738c499ce476e336267fe5d3546a7b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/627eb8f17d5fd787312a09b22ed574b8b738c499ce476e336267fe5d3546a7b9/userdata/shm major:0 minor:998 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64c3913ef0868e964da24e47fde7afcb2edc5db0527066ac2d8451806802e649/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64c3913ef0868e964da24e47fde7afcb2edc5db0527066ac2d8451806802e649/userdata/shm major:0 minor:296 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/68a30a55ed2f979625e18a77c39c55f0bd820b511f058e5d010e556725054ded/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/68a30a55ed2f979625e18a77c39c55f0bd820b511f058e5d010e556725054ded/userdata/shm major:0 minor:487 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6dc4ae2fbc88ea5c43de5d695fea8a7c8829343138c8856dcfeff187994c5c0f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6dc4ae2fbc88ea5c43de5d695fea8a7c8829343138c8856dcfeff187994c5c0f/userdata/shm major:0 minor:583 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/77149d9718de6b23dc52d1b4901db52831b3adbf959e9b29aeeec05c6d2db97e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/77149d9718de6b23dc52d1b4901db52831b3adbf959e9b29aeeec05c6d2db97e/userdata/shm major:0 minor:630 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7a26b2c8abf7070791144c3b808d314860b4cb305adbb9095d34928ddbac7f4a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7a26b2c8abf7070791144c3b808d314860b4cb305adbb9095d34928ddbac7f4a/userdata/shm major:0 minor:1020 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7b8c524be621d3b232cfcc53d4958e12d26da68f7931a17964ec87b85eee7bba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7b8c524be621d3b232cfcc53d4958e12d26da68f7931a17964ec87b85eee7bba/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7ed7554e0b6eb88f1840ed2eee6ab3bddc21f230340186b0439d63f9a885eb31/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ed7554e0b6eb88f1840ed2eee6ab3bddc21f230340186b0439d63f9a885eb31/userdata/shm major:0 minor:478 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/824e46bd462387249603454ca45d03ebca612478cdeb336ee057821a4d25b262/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/824e46bd462387249603454ca45d03ebca612478cdeb336ee057821a4d25b262/userdata/shm major:0 minor:1062 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/83490c1a955fe6b943eda48c6b81b0120dda14df023aa9b81ab0e80b7e90cadf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/83490c1a955fe6b943eda48c6b81b0120dda14df023aa9b81ab0e80b7e90cadf/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/87721a77ed25537b4640a4bb3b51b25ccc9baed9db06541dd0ac651dd7e4b7bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/87721a77ed25537b4640a4bb3b51b25ccc9baed9db06541dd0ac651dd7e4b7bb/userdata/shm major:0 minor:1319 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/89a7efb88fa53095b161b71e1b8530a4c4c20e49713a6786f3a59609c9325838/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/89a7efb88fa53095b161b71e1b8530a4c4c20e49713a6786f3a59609c9325838/userdata/shm major:0 minor:636 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/89afab5096911f8752c791dc8598fa3869a80370cd36dec7298c9b9d91c19d81/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/89afab5096911f8752c791dc8598fa3869a80370cd36dec7298c9b9d91c19d81/userdata/shm major:0 minor:440 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8f97b854deca962131113e1e495c21e96d538f802eb3f5de41bedce3ba1452e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f97b854deca962131113e1e495c21e96d538f802eb3f5de41bedce3ba1452e3/userdata/shm major:0 minor:1021 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/90a0d6bae4f861a78e1bdfe5f47cce060c508d92cdd797bd0fed3982c351779c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/90a0d6bae4f861a78e1bdfe5f47cce060c508d92cdd797bd0fed3982c351779c/userdata/shm major:0 minor:403 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9538eb885cdee2fa9a588e118eb2c741ad080c47591f7c5e45f680a3f6d76460/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9538eb885cdee2fa9a588e118eb2c741ad080c47591f7c5e45f680a3f6d76460/userdata/shm major:0 minor:1192 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9867de2597cef8ea21b27065bc6c5ebb42eda67f6c0e55aa21a2e92ed8089c54/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9867de2597cef8ea21b27065bc6c5ebb42eda67f6c0e55aa21a2e92ed8089c54/userdata/shm major:0 minor:813 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a547b5d4267673c7d0d24b2e2ba4109ca6066121198db068b1fa1a5a39df064e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a547b5d4267673c7d0d24b2e2ba4109ca6066121198db068b1fa1a5a39df064e/userdata/shm major:0 minor:1194 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a6e4933443321f6f827221301b84c881881ea51343c84ac3ad457e15891f86d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a6e4933443321f6f827221301b84c881881ea51343c84ac3ad457e15891f86d0/userdata/shm major:0 minor:301 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a75855ac22ad61c526e140082a63e50802db589f96d5c1f8fe72f371e5c93069/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a75855ac22ad61c526e140082a63e50802db589f96d5c1f8fe72f371e5c93069/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a8543f0f38e0eb6ba966eb5116372824fd9c0ed337e28520e9ed982545e8ec8c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a8543f0f38e0eb6ba966eb5116372824fd9c0ed337e28520e9ed982545e8ec8c/userdata/shm major:0 minor:1198 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a97c301937f1a0e25ebd74de8f7b7dfda3c088599ac5506143f0a1006a2bb044/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a97c301937f1a0e25ebd74de8f7b7dfda3c088599ac5506143f0a1006a2bb044/userdata/shm major:0 minor:80 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737/userdata/shm major:0 minor:1330 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bb7daa4c061606545ddec9122a80563e4f785ed9c98cafaa54bb7196f126bd02/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bb7daa4c061606545ddec9122a80563e4f785ed9c98cafaa54bb7196f126bd02/userdata/shm major:0 minor:543 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c28e00882d5ec6f229538a77e2756ba9244f00b81361306d5103b5a9571bb19f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c28e00882d5ec6f229538a77e2756ba9244f00b81361306d5103b5a9571bb19f/userdata/shm major:0 minor:176 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c8a44d641739b0edde589e3cc2ab82e120d1f854cda8b41d7ab46952d705c4b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c8a44d641739b0edde589e3cc2ab82e120d1f854cda8b41d7ab46952d705c4b9/userdata/shm major:0 minor:933 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40/userdata/shm major:0 minor:122 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cee49e60dc37b41a9c1559a523e9c4e3b09f5f3e76df27a36cc4a9d63ff6bee9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cee49e60dc37b41a9c1559a523e9c4e3b09f5f3e76df27a36cc4a9d63ff6bee9/userdata/shm major:0 minor:503 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d789b7d1d1c624f3c1461f3405b95a301ab5f66347a0727135e2339f341d9052/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d789b7d1d1c624f3c1461f3405b95a301ab5f66347a0727135e2339f341d9052/userdata/shm major:0 minor:628 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d7b6636edfdbd08e77deab1053c053904d89a5b39e0804954f8c80e56f1c9467/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d7b6636edfdbd08e77deab1053c053904d89a5b39e0804954f8c80e56f1c9467/userdata/shm major:0 minor:47 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441/userdata/shm major:0 minor:311 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da0959fc5c7a27175270ce726463fd3e9e8da5aff2a8a6bf45a477613fc17349/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da0959fc5c7a27175270ce726463fd3e9e8da5aff2a8a6bf45a477613fc17349/userdata/shm major:0 minor:974 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94/userdata/shm major:0 minor:315 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e7c778232fad4af52f47c31c73f233a65718cb5d7849085291ba01455710c481/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e7c778232fad4af52f47c31c73f233a65718cb5d7849085291ba01455710c481/userdata/shm major:0 minor:360 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f0367a6433cb34322b034ae4858460e50d6150d575d08858a19734f838b6527f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f0367a6433cb34322b034ae4858460e50d6150d575d08858a19734f838b6527f/userdata/shm major:0 minor:896 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f571f0a4aeeadbbb146ec437860c7a57c1e485b485fb0691d9981c6a2b22a120/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f571f0a4aeeadbbb146ec437860c7a57c1e485b485fb0691d9981c6a2b22a120/userdata/shm major:0 minor:641 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f5861a89c1b826c96f8d7eb1735da2b4cdf59be101852d074de82cd32893d879/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f5861a89c1b826c96f8d7eb1735da2b4cdf59be101852d074de82cd32893d879/userdata/shm major:0 minor:637 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f64a1a9e81543288c082ba54493b536ca2db47fef63c0b6ea8e2ecd8d4fc6a3b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f64a1a9e81543288c082ba54493b536ca2db47fef63c0b6ea8e2ecd8d4fc6a3b/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fb67bb4fcbc0cf30dc19aad2f8b3b13f31473c855e7d30010f86d687f8822d44/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb67bb4fcbc0cf30dc19aad2f8b3b13f31473c855e7d30010f86d687f8822d44/userdata/shm major:0 minor:1127 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/011c6603-d533-4449-b409-f6f698a3bd50/volumes/kubernetes.io~projected/kube-api-access-xh4wr:{mountpoint:/var/lib/kubelet/pods/011c6603-d533-4449-b409-f6f698a3bd50/volumes/kubernetes.io~projected/kube-api-access-xh4wr major:0 minor:972 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/011c6603-d533-4449-b409-f6f698a3bd50/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/011c6603-d533-4449-b409-f6f698a3bd50/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:971 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0127e0d5-9961-4ff6-851d-884e71e1dcf2/volumes/kubernetes.io~projected/kube-api-access-nbc5w:{mountpoint:/var/lib/kubelet/pods/0127e0d5-9961-4ff6-851d-884e71e1dcf2/volumes/kubernetes.io~projected/kube-api-access-nbc5w major:0 minor:922 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0127e0d5-9961-4ff6-851d-884e71e1dcf2/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/0127e0d5-9961-4ff6-851d-884e71e1dcf2/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:914 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~projected/kube-api-access-mb52w:{mountpoint:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~projected/kube-api-access-mb52w major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~secret/srv-cert major:0 minor:620 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/070ebb2d-57a2-4c76-8c93-e09d398f3b73/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/070ebb2d-57a2-4c76-8c93-e09d398f3b73/volumes/kubernetes.io~projected/kube-api-access major:0 minor:1325 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0cb042de-c873-408c-a4c4-ef9f7e546a08/volumes/kubernetes.io~projected/kube-api-access-p9ngc:{mountpoint:/var/lib/kubelet/pods/0cb042de-c873-408c-a4c4-ef9f7e546a08/volumes/kubernetes.io~projected/kube-api-access-p9ngc major:0 minor:576 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ce6dd93-084c-4e15-8b7c-e0829a6df14e/volumes/kubernetes.io~projected/kube-api-access-q8msx:{mountpoint:/var/lib/kubelet/pods/0ce6dd93-084c-4e15-8b7c-e0829a6df14e/volumes/kubernetes.io~projected/kube-api-access-q8msx major:0 minor:1013 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ce6dd93-084c-4e15-8b7c-e0829a6df14e/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/0ce6dd93-084c-4e15-8b7c-e0829a6df14e/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:1012 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~projected/kube-api-access-twgrj:{mountpoint:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~projected/kube-api-access-twgrj major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~secret/srv-cert major:0 minor:623 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22a83952-32ec-48f7-85cd-209b62362ae2/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/22a83952-32ec-48f7-85cd-209b62362ae2/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24765ff1-5e7d-4100-ad81-8f73555fc0a2/volumes/kubernetes.io~projected/kube-api-access-98725:{mountpoint:/var/lib/kubelet/pods/24765ff1-5e7d-4100-ad81-8f73555fc0a2/volumes/kubernetes.io~projected/kube-api-access-98725 major:0 minor:1189 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24765ff1-5e7d-4100-ad81-8f73555fc0a2/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/24765ff1-5e7d-4100-ad81-8f73555fc0a2/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1188 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24765ff1-5e7d-4100-ad81-8f73555fc0a2/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/24765ff1-5e7d-4100-ad81-8f73555fc0a2/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1180 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~projected/kube-api-access-bsb4q:{mountpoint:/var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~projected/kube-api-access-bsb4q major:0 minor:468 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~secret/encryption-config major:0 minor:466 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~secret/etcd-client major:0 minor:467 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~secret/serving-cert major:0 minor:508 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2cb764f6-40f8-4e87-8be0-b9d7b0364201/volumes/kubernetes.io~projected/kube-api-access-sp8hv:{mountpoint:/var/lib/kubelet/pods/2cb764f6-40f8-4e87-8be0-b9d7b0364201/volumes/kubernetes.io~projected/kube-api-access-sp8hv major:0 minor:286 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2cb764f6-40f8-4e87-8be0-b9d7b0364201/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/2cb764f6-40f8-4e87-8be0-b9d7b0364201/volumes/kubernetes.io~secret/metrics-tls major:0 minor:472 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~projected/kube-api-access-79bl6:{mountpoint:/var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~projected/kube-api-access-79bl6 major:0 minor:291 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~projected/kube-api-access-x6qs2:{mountpoint:/var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~projected/kube-api-access-x6qs2 major:0 minor:69 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/390a7aa5-c7f7-4baf-a2d2-e6da9a465042/volumes/kubernetes.io~projected/kube-api-access-dkpjn:{mountpoint:/var/lib/kubelet/pods/390a7aa5-c7f7-4baf-a2d2-e6da9a465042/volumes/kubernetes.io~projected/kube-api-access-dkpjn major:0 minor:587 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3e36c9eb-0368-46dc-af84-9c602a15555d/volumes/kubernetes.io~projected/kube-api-access-lbzsl:{mountpoint:/var/lib/kubelet/pods/3e36c9eb-0368-46dc-af84-9c602a15555d/volumes/kubernetes.io~projected/kube-api-access-lbzsl major:0 minor:953 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3e36c9eb-0368-46dc-af84-9c602a15555d/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/3e36c9eb-0368-46dc-af84-9c602a15555d/volumes/kubernetes.io~secret/cert major:0 minor:954 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a2d8ef6-14ac-490d-a931-7082344d3f46/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/4a2d8ef6-14ac-490d-a931-7082344d3f46/volumes/kubernetes.io~projected/ca-certs major:0 minor:614 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4a2d8ef6-14ac-490d-a931-7082344d3f46/volumes/kubernetes.io~projected/kube-api-access-ddtsj:{mountpoint:/var/lib/kubelet/pods/4a2d8ef6-14ac-490d-a931-7082344d3f46/volumes/kubernetes.io~projected/kube-api-access-ddtsj major:0 minor:617 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4f5b3b93-a59d-495c-a311-8913fa6000fc/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/4f5b3b93-a59d-495c-a311-8913fa6000fc/volumes/kubernetes.io~projected/ca-certs major:0 minor:615 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4f5b3b93-a59d-495c-a311-8913fa6000fc/volumes/kubernetes.io~projected/kube-api-access-2tszx:{mountpoint:/var/lib/kubelet/pods/4f5b3b93-a59d-495c-a311-8913fa6000fc/volumes/kubernetes.io~projected/kube-api-access-2tszx major:0 minor:618 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4f5b3b93-a59d-495c-a311-8913fa6000fc/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/4f5b3b93-a59d-495c-a311-8913fa6000fc/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:616 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~projected/kube-api-access-dlg2j:{mountpoint:/var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~projected/kube-api-access-dlg2j major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55a2662a-d672-4a46-9b81-bfcaf334eedb/volumes/kubernetes.io~projected/kube-api-access-gzghr:{mountpoint:/var/lib/kubelet/pods/55a2662a-d672-4a46-9b81-bfcaf334eedb/volumes/kubernetes.io~projected/kube-api-access-gzghr major:0 minor:749 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/55a2662a-d672-4a46-9b81-bfcaf334eedb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/55a2662a-d672-4a46-9b81-bfcaf334eedb/volumes/kubernetes.io~secret/serving-cert major:0 minor:405 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57811d07-ae8a-44b7-8efb-dafc5afad31e/volumes/kubernetes.io~projected/kube-api-access-vrmsh:{mountpoint:/var/lib/kubelet/pods/57811d07-ae8a-44b7-8efb-dafc5afad31e/volumes/kubernetes.io~projected/kube-api-access-vrmsh major:0 minor:128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:1315 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b/volumes/kubernetes.io~projected/kube-api-access-bbnd2:{mountpoint:/var/lib/kubelet/pods/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b/volumes/kubernetes.io~projected/kube-api-access-bbnd2 major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/608a8a56-daee-4fa1-8300-42155217c68b/volumes/kubernetes.io~projected/kube-api-access-px2vd:{mountpoint:/var/lib/kubelet/pods/608a8a56-daee-4fa1-8300-42155217c68b/volumes/kubernetes.io~projected/kube-api-access-px2vd major:0 minor:1190 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/608a8a56-daee-4fa1-8300-42155217c68b/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/608a8a56-daee-4fa1-8300-42155217c68b/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1184 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/608a8a56-daee-4fa1-8300-42155217c68b/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/608a8a56-daee-4fa1-8300-42155217c68b/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1186 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6320dbb5-b84d-4a57-8c65-fbed8421f84a/volumes/kubernetes.io~projected/kube-api-access-pgjlz:{mountpoint:/var/lib/kubelet/pods/6320dbb5-b84d-4a57-8c65-fbed8421f84a/volumes/kubernetes.io~projected/kube-api-access-pgjlz major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6320dbb5-b84d-4a57-8c65-fbed8421f84a/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/6320dbb5-b84d-4a57-8c65-fbed8421f84a/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:624 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/638b3f88-0386-4f30-8ca5-6255e8f936fc/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/638b3f88-0386-4f30-8ca5-6255e8f936fc/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:567 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/638b3f88-0386-4f30-8ca5-6255e8f936fc/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/638b3f88-0386-4f30-8ca5-6255e8f936fc/volumes/kubernetes.io~empty-dir/tmp major:0 minor:570 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/638b3f88-0386-4f30-8ca5-6255e8f936fc/volumes/kubernetes.io~projected/kube-api-access-996wg:{mountpoint:/var/lib/kubelet/pods/638b3f88-0386-4f30-8ca5-6255e8f936fc/volumes/kubernetes.io~projected/kube-api-access-996wg major:0 minor:571 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~projected/kube-api-access-qc5kx:{mountpoint:/var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~projected/kube-api-access-qc5kx major:0 minor:1113 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~secret/default-certificate major:0 minor:1106 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1108 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~secret/stats-auth major:0 minor:1110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~projected/kube-api-access-ssz8p:{mountpoint:/var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~projected/kube-api-access-ssz8p major:0 minor:276 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:476 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:471 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70e2ba24-4871-4d1d-9935-156fdbeb2810/volumes/kubernetes.io~projected/kube-api-access-4nmd6:{mountpoint:/var/lib/kubelet/pods/70e2ba24-4871-4d1d-9935-156fdbeb2810/volumes/kubernetes.io~projected/kube-api-access-4nmd6 major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70e2ba24-4871-4d1d-9935-156fdbeb2810/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/70e2ba24-4871-4d1d-9935-156fdbeb2810/volumes/kubernetes.io~secret/metrics-certs major:0 minor:627 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/732a3831-20e0-47dc-a29a-8bb4659541b7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/732a3831-20e0-47dc-a29a-8bb4659541b7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:475 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/732a3831-20e0-47dc-a29a-8bb4659541b7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/732a3831-20e0-47dc-a29a-8bb4659541b7/volumes/kubernetes.io~secret/serving-cert major:0 minor:110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74a7801b-b7a4-4292-91b3-6285c239aeb7/volumes/kubernetes.io~projected/kube-api-access-pdmhx:{mountpoint:/var/lib/kubelet/pods/74a7801b-b7a4-4292-91b3-6285c239aeb7/volumes/kubernetes.io~projected/kube-api-access-pdmhx major:0 minor:520 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74a7801b-b7a4-4292-91b3-6285c239aeb7/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/74a7801b-b7a4-4292-91b3-6285c239aeb7/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:439 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b098bd4-5751-4b01-8409-0688fd29233e/volumes/kubernetes.io~projected/kube-api-access-86pcb:{mountpoint:/var/lib/kubelet/pods/7b098bd4-5751-4b01-8409-0688fd29233e/volumes/kubernetes.io~projected/kube-api-access-86pcb major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~projected/kube-api-access-dqqkv:{mountpoint:/var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~projected/kube-api-access-dqqkv major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~secret/cert major:0 minor:470 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:474 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~projected/kube-api-access-n2b65:{mountpoint:/var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~projected/kube-api-access-n2b65 major:0 minor:294 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~secret/serving-cert major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~projected/kube-api-access-n4grf:{mountpoint:/var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~projected/kube-api-access-n4grf major:0 minor:1251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1238 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8e0c87ae-6387-4c00-b03d-582566907fb6/volumes/kubernetes.io~projected/kube-api-access-5dcvb:{mountpoint:/var/lib/kubelet/pods/8e0c87ae-6387-4c00-b03d-582566907fb6/volumes/kubernetes.io~projected/kube-api-access-5dcvb major:0 minor:979 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e0c87ae-6387-4c00-b03d-582566907fb6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8e0c87ae-6387-4c00-b03d-582566907fb6/volumes/kubernetes.io~secret/serving-cert major:0 minor:983 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e70a9f5-1154-40e9-a487-21e36e7f420a/volumes/kubernetes.io~projected/kube-api-access-mb7jb:{mountpoint:/var/lib/kubelet/pods/8e70a9f5-1154-40e9-a487-21e36e7f420a/volumes/kubernetes.io~projected/kube-api-access-mb7jb major:0 minor:1011 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e70a9f5-1154-40e9-a487-21e36e7f420a/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/8e70a9f5-1154-40e9-a487-21e36e7f420a/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:1010 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c/volumes/kubernetes.io~projected/kube-api-access-qpg44:{mountpoint:/var/lib/kubelet/pods/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c/volumes/kubernetes.io~projected/kube-api-access-qpg44 major:0 minor:582 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c/volumes/kubernetes.io~secret/metrics-tls major:0 minor:581 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ebd1a97-ff7b-4a10-a1b5-956e427478a8/volumes/kubernetes.io~projected/kube-api-access-gckc2:{mountpoint:/var/lib/kubelet/pods/8ebd1a97-ff7b-4a10-a1b5-956e427478a8/volumes/kubernetes.io~projected/kube-api-access-gckc2 major:0 minor:873 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8ebd1a97-ff7b-4a10-a1b5-956e427478a8/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/8ebd1a97-ff7b-4a10-a1b5-956e427478a8/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:872 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91168f3d-70eb-4351-bb83-5411a96ad29d/volumes/kubernetes.io~projected/kube-api-access-rt2q4:{mountpoint:/var/lib/kubelet/pods/91168f3d-70eb-4351-bb83-5411a96ad29d/volumes/kubernetes.io~projected/kube-api-access-rt2q4 major:0 minor:970 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91168f3d-70eb-4351-bb83-5411a96ad29d/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/91168f3d-70eb-4351-bb83-5411a96ad29d/volumes/kubernetes.io~secret/cert major:0 minor:956 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91d16f7b-390a-4d9d-99d6-cc8e210801d1/volumes/kubernetes.io~projected/kube-api-access-b8rjx:{mountpoint:/var/lib/kubelet/pods/91d16f7b-390a-4d9d-99d6-cc8e210801d1/volumes/kubernetes.io~projected/kube-api-access-b8rjx major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91d16f7b-390a-4d9d-99d6-cc8e210801d1/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/91d16f7b-390a-4d9d-99d6-cc8e210801d1/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:626 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~projected/kube-api-access-jtf52:{mountpoint:/var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~projected/kube-api-access-jtf52 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9cad383a-cb69-41a8-aec8-23ee1c930430/volumes/kubernetes.io~projected/kube-api-access-svc78:{mountpoint:/var/lib/kubelet/pods/9cad383a-cb69-41a8-aec8-23ee1c930430/volumes/kubernetes.io~projected/kube-api-access-svc78 major:0 minor:839 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9cad383a-cb69-41a8-aec8-23ee1c930430/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/9cad383a-cb69-41a8-aec8-23ee1c930430/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:830 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9cad383a-cb69-41a8-aec8-23ee1c930430/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/9cad383a-cb69-41a8-aec8-23ee1c930430/volumes/kubernetes.io~secret/webhook-cert major:0 minor:829 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1/volumes/kubernetes.io~projected/kube-api-access-rv6zq:{mountpoint:/var/lib/kubelet/pods/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1/volumes/kubernetes.io~projected/kube-api-access-rv6zq major:0 minor:925 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1/volumes/kubernetes.io~secret/certs major:0 minor:924 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:909 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~projected/kube-api-access major:0 minor:265 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~secret/serving-cert major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4267e3a-aaaf-4b2f-a37c-0f097a35783f/volumes/kubernetes.io~projected/kube-api-access-5pc72:{mountpoint:/var/lib/kubelet/pods/a4267e3a-aaaf-4b2f-a37c-0f097a35783f/volumes/kubernetes.io~projected/kube-api-access-5pc72 major:0 minor:815 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4cea44a-1c6e-465f-97df-2c951056cb85/volumes/kubernetes.io~projected/kube-api-access-57x9m:{mountpoint:/var/lib/kubelet/pods/a4cea44a-1c6e-465f-97df-2c951056cb85/volumes/kubernetes.io~projected/kube-api-access-57x9m major:0 minor:457 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4cea44a-1c6e-465f-97df-2c951056cb85/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/a4cea44a-1c6e-465f-97df-2c951056cb85/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:417 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5305004-5311-4bc4-ad7c-6670f97c89cb/volumes/kubernetes.io~projected/kube-api-access-kznmr:{mountpoint:/var/lib/kubelet/pods/a5305004-5311-4bc4-ad7c-6670f97c89cb/volumes/kubernetes.io~projected/kube-api-access-kznmr major:0 minor:1191 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5305004-5311-4bc4-ad7c-6670f97c89cb/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/a5305004-5311-4bc4-ad7c-6670f97c89cb/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1185 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5305004-5311-4bc4-ad7c-6670f97c89cb/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/a5305004-5311-4bc4-ad7c-6670f97c89cb/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1187 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~projected/kube-api-access-nqtld:{mountpoint:/var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~projected/kube-api-access-nqtld major:0 minor:163 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~secret/webhook-cert major:0 minor:164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b085f760-0e24-41a8-af09-538396aad935/volumes/kubernetes.io~projected/kube-api-access-q86gx:{mountpoint:/var/lib/kubelet/pods/b085f760-0e24-41a8-af09-538396aad935/volumes/kubernetes.io~projected/kube-api-access-q86gx major:0 minor:713 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~projected/kube-api-access-kcq24:{mountpoint:/var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~projected/kube-api-access-kcq24 major:0 minor:613 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~secret/encryption-config major:0 minor:610 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~secret/etcd-client major:0 minor:612 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~secret/serving-cert major:0 minor:611 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~projected/kube-api-access-njjq8:{mountpoint:/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~projected/kube-api-access-njjq8 major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~secret/serving-cert major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/kube-api-access-qph4g:{mountpoint:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/kube-api-access-qph4g major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~secret/metrics-tls major:0 minor:473 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6153510-452b-4726-8b63-8cc894daa168/volumes/kubernetes.io~projected/kube-api-access-lxz8j:{mountpoint:/var/lib/kubelet/pods/c6153510-452b-4726-8b63-8cc894daa168/volumes/kubernetes.io~projected/kube-api-access-lxz8j major:0 minor:419 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6153510-452b-4726-8b63-8cc894daa168/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/c6153510-452b-4726-8b63-8cc894daa168/volumes/kubernetes.io~secret/signing-key major:0 minor:418 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/kube-api-access-9sp95:{mountpoint:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/kube-api-access-9sp95 major:0 minor:282 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:477 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~projected/kube-api-access-6nwzm:{mountpoint:/var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~projected/kube-api-access-6nwzm major:0 minor:273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca1250a6-30f0-4cc0-b9b0-eabde42aefcf/volumes/kubernetes.io~projected/kube-api-access-fqqwj:{mountpoint:/var/lib/kubelet/pods/ca1250a6-30f0-4cc0-b9b0-eabde42aefcf/volumes/kubernetes.io~projected/kube-api-access-fqqwj major:0 minor:1112 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~projected/kube-api-access-h6f7j:{mountpoint:/var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~projected/kube-api-access-h6f7j major:0 minor:272 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d8e20d47-aeb6-41bf-9715-c437beb8e9e4/volumes/kubernetes.io~projected/kube-api-access-qv6t5:{mountpoint:/var/lib/kubelet/pods/d8e20d47-aeb6-41bf-9715-c437beb8e9e4/volumes/kubernetes.io~projected/kube-api-access-qv6t5 major:0 minor:298 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db8d6627-394c-4087-bfa4-bf7580f6bb4b/volumes/kubernetes.io~projected/kube-api-access-x6lsp:{mountpoint:/var/lib/kubelet/pods/db8d6627-394c-4087-bfa4-bf7580f6bb4b/volumes/kubernetes.io~projected/kube-api-access-x6lsp major:0 minor:283 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db8d6627-394c-4087-bfa4-bf7580f6bb4b/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/db8d6627-394c-4087-bfa4-bf7580f6bb4b/volumes/kubernetes.io~secret/proxy-tls major:0 minor:621 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df2b8111-41c6-4333-b473-4c08fb836f70/volumes/kubernetes.io~projected/kube-api-access-cd796:{mountpoint:/var/lib/kubelet/pods/df2b8111-41c6-4333-b473-4c08fb836f70/volumes/kubernetes.io~projected/kube-api-access-cd796 major:0 minor:1167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df2b8111-41c6-4333-b473-4c08fb836f70/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/df2b8111-41c6-4333-b473-4c08fb836f70/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df2b8111-41c6-4333-b473-4c08fb836f70/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/df2b8111-41c6-4333-b473-4c08fb836f70/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1162 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/df42c69b-1a0e-41f5-9006-17540369b9ad/volumes/kubernetes.io~projected/kube-api-access-f4q7n:{mountpoint:/var/lib/kubelet/pods/df42c69b-1a0e-41f5-9006-17540369b9ad/volumes/kubernetes.io~projected/kube-api-access-f4q7n major:0 minor:1016 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df42c69b-1a0e-41f5-9006-17540369b9ad/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/df42c69b-1a0e-41f5-9006-17540369b9ad/volumes/kubernetes.io~secret/proxy-tls major:0 minor:1015 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e3a675b9-feaa-4456-b7b4-0cd3afc42a42/volumes/kubernetes.io~projected/kube-api-access-nn8hz:{mountpoint:/var/lib/kubelet/pods/e3a675b9-feaa-4456-b7b4-0cd3afc42a42/volumes/kubernetes.io~projected/kube-api-access-nn8hz major:0 minor:331 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e56a17d6-d740-4349-833e-b5279f7db2d4/volumes/kubernetes.io~projected/kube-api-access-gg7sb:{mountpoint:/var/lib/kubelet/pods/e56a17d6-d740-4349-833e-b5279f7db2d4/volumes/kubernetes.io~projected/kube-api-access-gg7sb major:0 minor:804 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e68b3061-c9d2-469d-babf-7ccac0ad9b14/volumes/kubernetes.io~projected/kube-api-access-6xz68:{mountpoint:/var/lib/kubelet/pods/e68b3061-c9d2-469d-babf-7ccac0ad9b14/volumes/kubernetes.io~projected/kube-api-access-6xz68 major:0 minor:1296 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e68b3061-c9d2-469d-babf-7ccac0ad9b14/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/e68b3061-c9d2-469d-babf-7ccac0ad9b14/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1291 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d/volumes/kubernetes.io~projected/kube-api-access-w6wvl:{mountpoint:/var/lib/kubelet/pods/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d/volumes/kubernetes.io~projected/kube-api-access-w6wvl major:0 minor:1063 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d/volumes/kubernetes.io~secret/proxy-tls major:0 minor:1061 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8d6a6c0-b944-4206-9178-9a9930b303b9/volumes/kubernetes.io~projected/kube-api-access-zxjtb:{mountpoint:/var/lib/kubelet/pods/e8d6a6c0-b944-4206-9178-9a9930b303b9/volumes/kubernetes.io~projected/kube-api-access-zxjtb major:0 minor:736 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8d6a6c0-b944-4206-9178-9a9930b303b9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e8d6a6c0-b944-4206-9178-9a9930b303b9/volumes/kubernetes.io~secret/serving-cert major:0 minor:404 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2e9cdff-8c15-43df-b8df-7fe3a73fda86/volumes/kubernetes.io~projected/kube-api-access-82hfh:{mountpoint:/var/lib/kubelet/pods/f2e9cdff-8c15-43df-b8df-7fe3a73fda86/volumes/kubernetes.io~projected/kube-api-access-82hfh major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2e9cdff-8c15-43df-b8df-7fe3a73fda86/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/f2e9cdff-8c15-43df-b8df-7fe3a73fda86/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:625 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~secret/serving-cert major:0 minor:249 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/volumes/kubernetes.io~projected/kube-api-access-q7lsb:{mountpoint:/var/lib/kubelet/pods/f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/volumes/kubernetes.io~projected/kube-api-access-q7lsb major:0 minor:350 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f807f33c-8132-48a8-ab12-4b54c1cd2b10/volumes/kubernetes.io~projected/kube-api-access-g8742:{mountpoint:/var/lib/kubelet/pods/f807f33c-8132-48a8-ab12-4b54c1cd2b10/volumes/kubernetes.io~projected/kube-api-access-g8742 major:0 minor:359 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~secret/serving-cert major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~projected/kube-api-access-rc8jx:{mountpoint:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~projected/kube-api-access-rc8jx major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~projected/kube-api-access-pwjpw:{mountpoint:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~projected/kube-api-access-pwjpw major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/etcd-client major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/serving-cert major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~projected/kube-api-access-vpg26:{mountpoint:/var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~projected/kube-api-access-vpg26 major:0 minor:295 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/3ef6d8cf34ea021cdbf9ea24a7acb76816511cb8cd61f4b60d8fb1218080f2a3/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-1000:{mountpoint:/var/lib/containers/storage/overlay/86e1d07d009c2c7887f1387df707bba533c11bb89076b38f4cfa0c3ed88403f8/merged major:0 minor:1000 fsType:overlay blockSize:0} overlay_0-1002:{mountpoint:/var/lib/containers/storage/overlay/a799945602fb98dd25c1ee8342e93ab1e690fa5b793b71a96ea66291ad4a5f08/merged major:0 minor:1002 fsType:overlay blockSize:0} 
overlay_0-1004:{mountpoint:/var/lib/containers/storage/overlay/fd8fdf52e94dc1f91d06b609999d5d050cf50a3ac8dfe96201d6faab608f276a/merged major:0 minor:1004 fsType:overlay blockSize:0} overlay_0-1018:{mountpoint:/var/lib/containers/storage/overlay/470535c1a4cbd648ac85818a6762af451a828e9424e7941d4a2a8110693eec09/merged major:0 minor:1018 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/80ea6991ee3011b1dca8f881a1a2dbdd99460f332e0759d29a0601671ad153b1/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1022:{mountpoint:/var/lib/containers/storage/overlay/270118921bcb18bfa7c16de00057050e2878da4e1f6f77a007fbed6b10ebb791/merged major:0 minor:1022 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/e172615b7047f199f2ed7b8df9731e29ca985572ce0f2f2e70162cb553177c7f/merged major:0 minor:1026 fsType:overlay blockSize:0} overlay_0-1028:{mountpoint:/var/lib/containers/storage/overlay/42a84e2c6d84ab5dec57bd1e9e194566a7b2f7ad92bb9ccd9e42e887a5ef2f35/merged major:0 minor:1028 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/97896ea6c2eb1cc1126eab3c0e3b3eee7d2fcaa006a598ba6a4b5a807bef6f0b/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1030:{mountpoint:/var/lib/containers/storage/overlay/d5a8787ff42e00cdc6e4fc98d76e6259fe213e0134022e597c564e876fc3646d/merged major:0 minor:1030 fsType:overlay blockSize:0} overlay_0-1032:{mountpoint:/var/lib/containers/storage/overlay/33e4c87ef0494e1c7ebe47ccc001ea158577f64c37686ec53f2f905d92bbcfc2/merged major:0 minor:1032 fsType:overlay blockSize:0} overlay_0-1034:{mountpoint:/var/lib/containers/storage/overlay/3ffa535ab4d0d8d898f90988f5a3ff495cb83aeefb1296131e70cd0422f03c6e/merged major:0 minor:1034 fsType:overlay blockSize:0} overlay_0-1036:{mountpoint:/var/lib/containers/storage/overlay/5a600b2813451b4fd17f4d581110856b9e552d693637369692a407c0354645c5/merged major:0 minor:1036 fsType:overlay blockSize:0} 
overlay_0-1038:{mountpoint:/var/lib/containers/storage/overlay/39126c4e1deeee807d90661879ac2df56d3338bd3ee35e4c92ce76acb7d79fef/merged major:0 minor:1038 fsType:overlay blockSize:0} overlay_0-1040:{mountpoint:/var/lib/containers/storage/overlay/f24dc0dc4a5da2005a1a6fd98e6b488b77a949e7656dfde8b389f64df7a08049/merged major:0 minor:1040 fsType:overlay blockSize:0} overlay_0-1042:{mountpoint:/var/lib/containers/storage/overlay/01159ca4214946bf96caa35a4108f26d5bc615c7ee739ff972170d0127ce1e67/merged major:0 minor:1042 fsType:overlay blockSize:0} overlay_0-1071:{mountpoint:/var/lib/containers/storage/overlay/26822767f9f57f4bd0917c33fefcc5bb6510453394260b90453ff1ba85779d83/merged major:0 minor:1071 fsType:overlay blockSize:0} overlay_0-1073:{mountpoint:/var/lib/containers/storage/overlay/945d44841b33d437ddedbd07701dac740066df86c56dd098a96c0ba8fbd6d73b/merged major:0 minor:1073 fsType:overlay blockSize:0} overlay_0-1075:{mountpoint:/var/lib/containers/storage/overlay/da24e2711804995c4b7fd196df67a5c6e4f363fad084a3a4867b2b7562122e4a/merged major:0 minor:1075 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/308c9143338255a2d781821e28724bda81f721e6ae5622f8a472a85e2ff54d30/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/c81f84bd52f125dbb6f4d3f9be837e11b8dbe4233ba7b59d15a282885b1713a5/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1086:{mountpoint:/var/lib/containers/storage/overlay/0e85c8bc1d7eff665cc5ba054226cb54892ca7d697a2667cb048a2727cf358cb/merged major:0 minor:1086 fsType:overlay blockSize:0} overlay_0-1088:{mountpoint:/var/lib/containers/storage/overlay/563ac88f20d027a29b6b24efc8231dd1e99f399816886c43aef6a2876f8604bd/merged major:0 minor:1088 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/10c54bc8094e200cfe17b80e5676c603a6f3de0ac0ae819b693536f1c095e678/merged major:0 minor:1097 fsType:overlay blockSize:0} 
overlay_0-1103:{mountpoint:/var/lib/containers/storage/overlay/c9ec25e0f1ae47cecadbe7e2c0838f1cf86c343479b77505f64beb9897454fb1/merged major:0 minor:1103 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/a16328d4ff532a243002f1003d6b898fe3dce57805097994386a440bd00db46e/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-1120:{mountpoint:/var/lib/containers/storage/overlay/e9a128996399cc1e8dca3240026551276ef28ffdd9b3a47269f5372ed3c54fb0/merged major:0 minor:1120 fsType:overlay blockSize:0} overlay_0-1123:{mountpoint:/var/lib/containers/storage/overlay/0132f3a5ba369cfe5d30fdc3a5bc2c8ddf1b8097723f0c5bd5b49a4b87103e35/merged major:0 minor:1123 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/be67b62786fcab8e0b5a3bda7d5d3546a6251f36272f3ec2972ec914b996c1e8/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-1131:{mountpoint:/var/lib/containers/storage/overlay/a267c585a4cbaafdc272806c140d271c965831009056926a60cba6a06ce9d2a8/merged major:0 minor:1131 fsType:overlay blockSize:0} overlay_0-1134:{mountpoint:/var/lib/containers/storage/overlay/97bdd479d907e1b6fef3c26b2594773a44241f2cffc236bbb218bb0c785054c6/merged major:0 minor:1134 fsType:overlay blockSize:0} overlay_0-1136:{mountpoint:/var/lib/containers/storage/overlay/351a75b29a97136dff0b0577d38e19ae9a5aa2da4eefecb3a9c378c4e41af217/merged major:0 minor:1136 fsType:overlay blockSize:0} overlay_0-1140:{mountpoint:/var/lib/containers/storage/overlay/46344f02251809c6ae87a95a673010ec6aa390418d096991068dff57525d82f1/merged major:0 minor:1140 fsType:overlay blockSize:0} overlay_0-1141:{mountpoint:/var/lib/containers/storage/overlay/3c5610be03cab111b0cbb74e2df5d265c0d6f132ebecd022d732854c82a7ded1/merged major:0 minor:1141 fsType:overlay blockSize:0} overlay_0-115:{mountpoint:/var/lib/containers/storage/overlay/698cb0643a1bbcb0ba1aa4d96e06086378d31d8a40cdb25943a87237a34928e7/merged major:0 minor:115 fsType:overlay blockSize:0} 
overlay_0-1154:{mountpoint:/var/lib/containers/storage/overlay/900ff2d2e92cab821a22a191a532e6fcbdf0c3f34db80c67b150efd8e0fd4574/merged major:0 minor:1154 fsType:overlay blockSize:0} overlay_0-1157:{mountpoint:/var/lib/containers/storage/overlay/069b5df576f1528e44fbaea8a947052006c89cf7c7a0f09c1ef1412d70bd2aba/merged major:0 minor:1157 fsType:overlay blockSize:0} overlay_0-1158:{mountpoint:/var/lib/containers/storage/overlay/da3889891afe5ded1aa6a0869cb8b6df0106fcbd46133089dd219ed2651528f4/merged major:0 minor:1158 fsType:overlay blockSize:0} overlay_0-1170:{mountpoint:/var/lib/containers/storage/overlay/f936b3a8206bf65df02360381a5593b120de46688d9cdd8b8db6e2ef26a36a10/merged major:0 minor:1170 fsType:overlay blockSize:0} overlay_0-1174:{mountpoint:/var/lib/containers/storage/overlay/aa6de63293dbdedaa35cefca0f2f1db60f4a9a997e545798ec9a3c534137e6e5/merged major:0 minor:1174 fsType:overlay blockSize:0} overlay_0-1196:{mountpoint:/var/lib/containers/storage/overlay/9834fecde98b602d30c488db4a83961b76b68885d7621ce0f05ced5feca5ae99/merged major:0 minor:1196 fsType:overlay blockSize:0} overlay_0-1200:{mountpoint:/var/lib/containers/storage/overlay/2a230f50f4c1b83a637fcb06c5cdd3f03fe8030e1ecaeb5261e7bcc17cd5eef9/merged major:0 minor:1200 fsType:overlay blockSize:0} overlay_0-1202:{mountpoint:/var/lib/containers/storage/overlay/c93e3123cf9a1e067e5c63612e832a96938096dbf1d4e4af624307ff9e600d44/merged major:0 minor:1202 fsType:overlay blockSize:0} overlay_0-1204:{mountpoint:/var/lib/containers/storage/overlay/fc56312a550831267e989f4e0333b239e359e97a9247a61d0410fc196d1f0b91/merged major:0 minor:1204 fsType:overlay blockSize:0} overlay_0-1206:{mountpoint:/var/lib/containers/storage/overlay/70324139a99800f3e50dad3d31cf3d0edb235c7b720bb3f3a783613803241f92/merged major:0 minor:1206 fsType:overlay blockSize:0} overlay_0-1212:{mountpoint:/var/lib/containers/storage/overlay/71f1f178083e5aae41cd5187c7d8a0cc34933b04533c04f0ebcfb3d8daf90454/merged major:0 minor:1212 fsType:overlay 
blockSize:0} overlay_0-1217:{mountpoint:/var/lib/containers/storage/overlay/a8b9487752c876ccbe54d0dedd5a1047c54b25618ba8003537af613acedf1147/merged major:0 minor:1217 fsType:overlay blockSize:0} overlay_0-1219:{mountpoint:/var/lib/containers/storage/overlay/438a369209333ff30b207f70a21c2f109c730e5785afc93d98e195b784b64693/merged major:0 minor:1219 fsType:overlay blockSize:0} overlay_0-1221:{mountpoint:/var/lib/containers/storage/overlay/1d1704530e0d2420781ce10db797f630b6f1a7b6172448be54969313fbda9f39/merged major:0 minor:1221 fsType:overlay blockSize:0} overlay_0-1223:{mountpoint:/var/lib/containers/storage/overlay/70988c97dc3a63eca178b01b62a366912d64ee4b5dc089f577e15c4c374d67c9/merged major:0 minor:1223 fsType:overlay blockSize:0} overlay_0-1225:{mountpoint:/var/lib/containers/storage/overlay/6eb72ff1aff252ec826d1b19540a62c2884e0063f29703be041e6e7390be5429/merged major:0 minor:1225 fsType:overlay blockSize:0} overlay_0-1237:{mountpoint:/var/lib/containers/storage/overlay/5a88fa990f7987ae05fcde128436cea6a184c522572895816ddd2a6dc1359bd2/merged major:0 minor:1237 fsType:overlay blockSize:0} overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/a00d1e591f800be5890ddc494c1cd97e74511bf6d2ded156a3829ee39e1d0f50/merged major:0 minor:124 fsType:overlay blockSize:0} overlay_0-1254:{mountpoint:/var/lib/containers/storage/overlay/e17504a6a1de9ec29aa3ecfb0d394a427db27c1440656f8355a5d1485201fcee/merged major:0 minor:1254 fsType:overlay blockSize:0} overlay_0-1256:{mountpoint:/var/lib/containers/storage/overlay/a17ca9d98fcb3437692c540359d50b784ef5c3a9027d64953bce48d3bce1f681/merged major:0 minor:1256 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/40564801473c16d5d6d6a93efdd44136f51778ea4fbe993bf95dbeccbb69cb78/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-1263:{mountpoint:/var/lib/containers/storage/overlay/2e2f5665b5a42f1fee2378da5566ec41cb71b96b47bfff05f685f89566577f8c/merged major:0 minor:1263 fsType:overlay 
blockSize:0} overlay_0-1276:{mountpoint:/var/lib/containers/storage/overlay/b1696e9abcaef5b2d4b581b4f654672734c3d4913744d982f4f8fbe23105a1b1/merged major:0 minor:1276 fsType:overlay blockSize:0} overlay_0-1284:{mountpoint:/var/lib/containers/storage/overlay/1e0537e093cabd8965bf745270787022234de0e4015bc5f5eb27c3486c2bd977/merged major:0 minor:1284 fsType:overlay blockSize:0} overlay_0-1286:{mountpoint:/var/lib/containers/storage/overlay/862082bd463c5b559d382d7bcb4da56723d861f4dae7366ee2df28cfa1462827/merged major:0 minor:1286 fsType:overlay blockSize:0} overlay_0-1289:{mountpoint:/var/lib/containers/storage/overlay/e8dcf9d2674b10aad0ea96a4ac5efaf745cfc631bd67d0b9a76350ccd272db30/merged major:0 minor:1289 fsType:overlay blockSize:0} overlay_0-1299:{mountpoint:/var/lib/containers/storage/overlay/77327dae0b3644e975bc984efaa520c9d0d62a57b2b9a204b9a156d3d5a737a8/merged major:0 minor:1299 fsType:overlay blockSize:0} overlay_0-1301:{mountpoint:/var/lib/containers/storage/overlay/1c529986eef5e87baeeaa094c67ef16a7a0767aa3375ece4dad8c7164af5e706/merged major:0 minor:1301 fsType:overlay blockSize:0} overlay_0-1303:{mountpoint:/var/lib/containers/storage/overlay/19b3e9b35dc6d94903f4d4827c21ecb97ec712042e13a4a9b449cb4295459b72/merged major:0 minor:1303 fsType:overlay blockSize:0} overlay_0-1313:{mountpoint:/var/lib/containers/storage/overlay/75c650360d6e620c6f836f567aa5ce40ce3fd30fefebcc308b54a3ba83cb59e5/merged major:0 minor:1313 fsType:overlay blockSize:0} overlay_0-1321:{mountpoint:/var/lib/containers/storage/overlay/3e5a1292e5ce65d50edfd5b5c5e2ad282e69c11a1a630dfabea5ba71633f0433/merged major:0 minor:1321 fsType:overlay blockSize:0} overlay_0-1323:{mountpoint:/var/lib/containers/storage/overlay/6a714fc9338e9926ba924611dc601dd3f2d7ec23f2b333402d87b1c88efc7fa2/merged major:0 minor:1323 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/da7667320cc5bea12973bff2f7edd7090ee5428b7f583e90b3e4d031915e98fe/merged major:0 minor:133 fsType:overlay 
blockSize:0} overlay_0-1334:{mountpoint:/var/lib/containers/storage/overlay/c17b9a1713478bdce4576edb866edb664b36c64ede10d372433bf28aaca4e77c/merged major:0 minor:1334 fsType:overlay blockSize:0} overlay_0-1336:{mountpoint:/var/lib/containers/storage/overlay/481ca488d6585402cb527639e62012376996406b84b7f5182075d8de49f21cd7/merged major:0 minor:1336 fsType:overlay blockSize:0} overlay_0-1339:{mountpoint:/var/lib/containers/storage/overlay/6843709268bc39a518983ba1399b759b4bca07dd832236f194ab9c1f3550aa7e/merged major:0 minor:1339 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/799211e2bc3ccf15eddecfcf8891bb54b6be6784d440865391cc3a7ed8a3c8ad/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/5037d156baea2600dcbd12b26fbcf1ea397d7de610fc348c4f436d074ce07a87/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/4f6f0481619d9ce8fdd81b1a71f7dd42be6d0fe097e556b3dacc28bd5986f444/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/a7e735f04439ecc0f7df8008813b7c2b25a3b428aafc85b5f7a58ce279f1024f/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/1b7aed5ada281a842722cde2200e5cf4f8793fcd606ce02a7cba945eebdae018/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/fc3ea121c41bfec68b325768f6f90cc367e1a6eac103fd21e7a05898744bfff4/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/ad8660e33ed3bb92daa8ab78168d98e7c4a86ba929fdbbb9fea44d9e9a20c570/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/9fdb5e6e2fffcd87a6925276df5ec8cf25e36cf7c1f353722f5971fe887c4561/merged major:0 minor:161 fsType:overlay blockSize:0} 
overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/ae5c80abbbfd6f0980efc8baed3ed024676bd0ee548d0a2f51f46603d3a977f4/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/47efc9d6e25b8424bdc25fd0d853abbbbede36b15483a38387b9ad389b01db56/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/2a5f84e8de8a56ac584bf1270459762747fd63944364a2d4d71955af6d2039ed/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-177:{mountpoint:/var/lib/containers/storage/overlay/a4560f2439aa68ab1648da9d7fb151a3f7ad398949dfa6ea82222d7e29d98a18/merged major:0 minor:177 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/b78398c1c620898efbb7eb92bfc5080113a5c7093a99e2c16c287a58fc2319d8/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-182:{mountpoint:/var/lib/containers/storage/overlay/b701f6b0fca02b0ef327895202e0521e27b6532fe892c554dc100952708e8473/merged major:0 minor:182 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/fb1d2a57a8a98c45e1426332bf0a0eaa9e6a3f02a8f7c2327ce6617968193710/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/a48eef0f416d57233b5b8f52ea746975d881c4a781fb62e6b30882b48cf2ee32/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-187:{mountpoint:/var/lib/containers/storage/overlay/ba25f8038f57ec1d0490903af258e0e9c5eb32d4618e5d217a1f4a85d5d36d58/merged major:0 minor:187 fsType:overlay blockSize:0} overlay_0-192:{mountpoint:/var/lib/containers/storage/overlay/d33296bb59436234745f3b78ba35c9eb93589235ccd711ee08f404373b4b282c/merged major:0 minor:192 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/de0dccaea7028d62d6d867675af32dc5ff0126af8e48771ce450116de674c700/merged major:0 minor:194 fsType:overlay blockSize:0} 
overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/92ee17d746458be41e2614197f92d1c003800ae000a336e123fd8c15078b6e57/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-197:{mountpoint:/var/lib/containers/storage/overlay/e17c24b072f1bd8f98da69b5ceca83e0c2eb90c2d29e811b36076df4cf9a81cd/merged major:0 minor:197 fsType:overlay blockSize:0} overlay_0-205:{mountpoint:/var/lib/containers/storage/overlay/3e780453bf5b256d8750695debe3906b7c5fe1c86eea09ae56e2594278876b1b/merged major:0 minor:205 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/cd818539fd064bef04c67a80bf1649c669b34a82fbaba2b18aa6ab2696be1885/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/4be79f89cc80f16351628aa40eba4bdd39f0e1bc98929f15a52ba8182e958b5c/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/126db6a05fc8da76433f2532cbd478b25ac8cbdcdbbac760b50c1796da2cf1f9/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/9f80212e517418678c8d00c551abb0c27d5409cfcd5c3c81ee688eb8061b61c4/merged major:0 minor:221 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/03e5dada94badedb386ca7f5fc57737a01bb05acf8b206cd556eafab6b09eca3/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/194782edf342f65e2be81a4ea666f2c1d9554de829aeee4a2b636875da0ea34e/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/4283d28af102c923f119c2a591ce9f8de2e63cf08a0720388af4959dbf37c911/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/6eaf58e5425d3360cd421d9b6e58088f11e72ae4dcb6015181d2f7e44c28526f/merged major:0 minor:303 fsType:overlay blockSize:0} 
overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/8743293ad202b0a94b82ac74715922407ef2df1697bb53bbdef1a4c6e6285a89/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/56c3d2b2bc0d68dde18355cf1de922fc8bfabf7e3a8938e0b3a8f007055db113/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/4ff59adf238af09bfccd2172cfa75090e3e3e6e730e14fc3f2fd5178ec57ea53/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/6318cec2444ca3ca024c86c379a20a5e1d888a2fbef286b618c10bd70f88fda0/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/4f019a94493475940a5b632e7e570303e6ca5eee8f373c3ce6bce0ecae0f21ce/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/f166a85d420bf8c887847ab4710898fe476cfe20eccd6921f7c414ce248be584/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/6f269062d78f239e5837ccc3c68ba6ba30ebe652f257988df68b60771d978328/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/bcc3a485e9337fcae85b691f80b074f1aefb70626d8ee61c208f7efb29b33312/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/a0a366102d3ea86e9308feccf69b4a90ed92d6ffa5860868c19134f22e721f28/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/48fdd596850fb33dee39a08039a25b0e7aeaa2d8062363243e80f83de151b79e/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/e6649a6850233f601e7cdd2c8bb2b3af8f7835b1704352610abec5b53d3de44c/merged major:0 minor:332 fsType:overlay blockSize:0} 
overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/b5fdd67efd34605e19035e9da7fa973bee5f84465efe2884f9f9d8c230414d96/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-342:{mountpoint:/var/lib/containers/storage/overlay/41203a186f94b1aea79459ae6b1186728710d3fd9551f57d68ffcef1922d5eab/merged major:0 minor:342 fsType:overlay blockSize:0} overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/18f1ac417a10e860d85371cd2781547f503257636e7bdc977114dd218c1f4753/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/64a61abc0238b7e79f92f95a4a5acfe154d2ce649a375f13cf21ce24f0a98988/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/01cab2865d48e423f4cf6135c9e8ff289b3d1ccb22b2b88ede19e10433c18a9f/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/9c0c02f585c160d3b1dbfa7a023e6812550752c8091fc3624fdeaa53c139a86a/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/bc0d3016cd7dafcbaa1fe80f05af8a02aefdf75270b944d3ca316ff5226f15ee/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/02be86d53aed5e75537ee918311a12f23e8e662ba9194134c3fd70af5c3baf98/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/317b9dbb44b7e59838835487d797d1506e93194ca173c94bae243ea710cd5ee0/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/306de849553ceb965ea940f9cb80026cb0af66df2d1adcac396c9e31f2b50914/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-368:{mountpoint:/var/lib/containers/storage/overlay/48208d3e77efd966d93c0e0de715ea75c76c3fce4398e05576c88a1da40977d0/merged major:0 minor:368 fsType:overlay blockSize:0} 
overlay_0-379:{mountpoint:/var/lib/containers/storage/overlay/31ea8215e9373e4ab55c377b5d82b02b892e12931a4538207fdb6065d244408d/merged major:0 minor:379 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/52eaa4d882f8b6b0b69af8e8c84ed89eb49a088608897f02712e94f834c9cea7/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-399:{mountpoint:/var/lib/containers/storage/overlay/68d8abe9dc41e8ef0486cc2d28e30832cea84ed74bd5b2ca0df84cf3d38d346e/merged major:0 minor:399 fsType:overlay blockSize:0} overlay_0-401:{mountpoint:/var/lib/containers/storage/overlay/0230cb3ac927fbc84a19e962f504e7ba071b71d27a4a6a172de72576be089337/merged major:0 minor:401 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/a066a5335c11b9211647d3f2495d06894fb42e4e00208457f77e13f42fd0d7a8/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/077893f54370afcc4a91d6464958ba0745525fda956d46bb2807d0b6472dc45a/merged major:0 minor:409 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/b1f124d0027819d23dc19a8de3eddc922e21060e6dab468c02175ec61b20ffca/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-412:{mountpoint:/var/lib/containers/storage/overlay/04cde26a856948c73131339f6d2782e5af56e4eb9f1707e2191aee9622c601ea/merged major:0 minor:412 fsType:overlay blockSize:0} overlay_0-414:{mountpoint:/var/lib/containers/storage/overlay/f229387b65b7fccdf7fa67bed0306b320eb6f0224c5512dd5ab568163b920ba9/merged major:0 minor:414 fsType:overlay blockSize:0} overlay_0-416:{mountpoint:/var/lib/containers/storage/overlay/3194b4d35ee852183f566f1b8e6846c3f6d4e1673e06156cae819fad63974316/merged major:0 minor:416 fsType:overlay blockSize:0} overlay_0-420:{mountpoint:/var/lib/containers/storage/overlay/9282225fbc8302de6d10c2ddb68083dddb7ecdc0e02f0bca44aa2f3cb0dbd5cf/merged major:0 minor:420 fsType:overlay blockSize:0} 
overlay_0-426:{mountpoint:/var/lib/containers/storage/overlay/675004135fe47d79f0dc0731659b24d59e83857507563ac9e0fa93a245eeb7e1/merged major:0 minor:426 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/1fafd4b5e0a976775581d6a01581aefe69d713e0fadec2ec1f4e26b5256b1bfb/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-430:{mountpoint:/var/lib/containers/storage/overlay/067089eaee5cef38f1a0526435d2ac6f35a68c4190816b5fbf3571b4120889ff/merged major:0 minor:430 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/1d8482703642f85b1fe4bbceeb65e73af7a136cbda5c74df678ad0106edb0fac/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-437:{mountpoint:/var/lib/containers/storage/overlay/c920d396f862ebd63b0699db694407771d1b36fe06dda7fcaadae68ccccc7b1e/merged major:0 minor:437 fsType:overlay blockSize:0} overlay_0-441:{mountpoint:/var/lib/containers/storage/overlay/bb69a8fb18acaabca9d3452223de95276217d87b1e792d6c90796ee82ab2b730/merged major:0 minor:441 fsType:overlay blockSize:0} overlay_0-444:{mountpoint:/var/lib/containers/storage/overlay/22a968384c75c5f97c588080e53d6c41521a3cd2342a7093cd03276706435734/merged major:0 minor:444 fsType:overlay blockSize:0} overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/76bc37e5e6b95d797a2825fabc79963344a5c540f0c5c522970887ef5c29e45c/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-456:{mountpoint:/var/lib/containers/storage/overlay/ccf6430824a28d895366a8d8ee57cbb79192b2e59f5ca9178dddacfb5a58d116/merged major:0 minor:456 fsType:overlay blockSize:0} overlay_0-461:{mountpoint:/var/lib/containers/storage/overlay/142ac3d4cd22270eee2a56ff66e5f88b59acf3927ee8e0fb3a9d50872849b030/merged major:0 minor:461 fsType:overlay blockSize:0} overlay_0-463:{mountpoint:/var/lib/containers/storage/overlay/40d87c3a9a8018f1cf8820ecb21e89ad6ca6437e82af4cf63bd0a97184b235f9/merged major:0 minor:463 fsType:overlay blockSize:0} 
overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/880e078c2a93e31e5830ff143cf0b1530ff68dd23856b88efe20f6630f7df776/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-494:{mountpoint:/var/lib/containers/storage/overlay/c6aeb3a7fd110febcf42257591572f91fcacc3bd711e857920a3573dde07df08/merged major:0 minor:494 fsType:overlay blockSize:0} overlay_0-496:{mountpoint:/var/lib/containers/storage/overlay/4a88403b28dfc618d1bbba022e86c908c1bdb41fbd75492ab337fc56b68c45e6/merged major:0 minor:496 fsType:overlay blockSize:0} overlay_0-498:{mountpoint:/var/lib/containers/storage/overlay/295e8ea66b8f4bfb20ef1932ed00d9b87f7de416f9de88c3554b8278f8ff1b65/merged major:0 minor:498 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/43709e9e627b2e29b081d8085ba3b6f4a5cc10f309965a7d9d6f0587add4d769/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-500:{mountpoint:/var/lib/containers/storage/overlay/c5ad517bd80838573a317266cfa2994fca5227b9552fe3b5162625873cfc8f4f/merged major:0 minor:500 fsType:overlay blockSize:0} overlay_0-511:{mountpoint:/var/lib/containers/storage/overlay/442e97f88fdfe6a57fbe5f48fd3a7f55e76202535a5c3183636f4471e1147618/merged major:0 minor:511 fsType:overlay blockSize:0} overlay_0-513:{mountpoint:/var/lib/containers/storage/overlay/45cb10e4f7040a5bdfdc9a013c700a365e4eb9a2c7e519cf2e74800887f1f9e0/merged major:0 minor:513 fsType:overlay blockSize:0} overlay_0-515:{mountpoint:/var/lib/containers/storage/overlay/abba9e27674500a89069a9b7952fc9d6b22d2868c8ea7c47a2feaead653f2f19/merged major:0 minor:515 fsType:overlay blockSize:0} overlay_0-517:{mountpoint:/var/lib/containers/storage/overlay/8ee83ed91da9513a30dbd9c5e03f073fa77a7b2ef5025974f63c4258bb1a6f23/merged major:0 minor:517 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/9ef3385c4bc7bdd2218e5488da246455c9c15dcdf785e27696dd1b76772b619f/merged major:0 minor:52 fsType:overlay blockSize:0} 
overlay_0-521:{mountpoint:/var/lib/containers/storage/overlay/6db9c7afa67c3edc0a0d22a152bb4fe8a0e07ffc8c6be35e38b23af02aadeec5/merged major:0 minor:521 fsType:overlay blockSize:0} overlay_0-524:{mountpoint:/var/lib/containers/storage/overlay/ac7f1dcdb819dc15129489a07b431060df5f9c8bec92fced96078357c66752a2/merged major:0 minor:524 fsType:overlay blockSize:0} overlay_0-527:{mountpoint:/var/lib/containers/storage/overlay/fad3a864433300ae2eafbd0cb3c55899e8643facdc7983ecadd25704f4916816/merged major:0 minor:527 fsType:overlay blockSize:0} overlay_0-531:{mountpoint:/var/lib/containers/storage/overlay/988ea623337e07818d166932a4ecf022d3a368e5c52d0f6facf7f5b343de639e/merged major:0 minor:531 fsType:overlay blockSize:0} overlay_0-545:{mountpoint:/var/lib/containers/storage/overlay/2a4e3d2efcd06edaaae55ee6448157bac02ab03ea604558fcb6c49836a30e5de/merged major:0 minor:545 fsType:overlay blockSize:0} overlay_0-547:{mountpoint:/var/lib/containers/storage/overlay/ae51b4fdf51217696eee316ec567d4bfba17ff28b1896c7342753b7285514ae5/merged major:0 minor:547 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/99b8b080a7a40f6d21e7a7bf77dc3042be6d48bb03b06fd54785bbb93a2345d8/merged major:0 minor:548 fsType:overlay blockSize:0} overlay_0-554:{mountpoint:/var/lib/containers/storage/overlay/2222d0c74bb062f248aaf91bfd814630acf2901e55b0b1b86a01eacd3f7f1a65/merged major:0 minor:554 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/714185c7203f6f6794260008cc5d0333e27866879ce95c5be26a31971f9599ba/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-564:{mountpoint:/var/lib/containers/storage/overlay/2f5d77e13fef497805a4a828e1c6a2259d4f1b512cfcf528159675661b49d811/merged major:0 minor:564 fsType:overlay blockSize:0} overlay_0-566:{mountpoint:/var/lib/containers/storage/overlay/1516f7783d5d80cfc8fc73e345a9232b971f6869ea095ac8a7aea931bceaa12e/merged major:0 minor:566 fsType:overlay blockSize:0} 
overlay_0-572:{mountpoint:/var/lib/containers/storage/overlay/f61139f75d7c2262fd6413ca4ea3605030cab629de8d47434e7e115e90c80c20/merged major:0 minor:572 fsType:overlay blockSize:0} overlay_0-574:{mountpoint:/var/lib/containers/storage/overlay/db76616eef01b93ad0b48b5fd21847f8512c95b635172d10c10b5f63f2099021/merged major:0 minor:574 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/6f2dd621a160e01a4c3d64b4170b249d5266188446e667149b04490d9a5f6f6c/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-585:{mountpoint:/var/lib/containers/storage/overlay/14ad5c5ba12e94731a866a254aa2d134d6aca0f7ffa883e0a523e0c1d4ccfa8c/merged major:0 minor:585 fsType:overlay blockSize:0} overlay_0-588:{mountpoint:/var/lib/containers/storage/overlay/99431b92902f959d838c10ab43d21e586cbbc49d20f864a99e19a232206aab25/merged major:0 minor:588 fsType:overlay blockSize:0} overlay_0-591:{mountpoint:/var/lib/containers/storage/overlay/d3621eada34ede2de69b70ae9323f6f4ce603ed868a637c71a5dab4227a71c72/merged major:0 minor:591 fsType:overlay blockSize:0} overlay_0-594:{mountpoint:/var/lib/containers/storage/overlay/9ccdbcbd6f30a7ae3029e691f488b9df786ca5e88963b371a1a1d116834e3a79/merged major:0 minor:594 fsType:overlay blockSize:0} overlay_0-595:{mountpoint:/var/lib/containers/storage/overlay/cdfeb13620188f3b836a699f77e5f15467b5e2f9a9cd105ddc596f42cee7a007/merged major:0 minor:595 fsType:overlay blockSize:0} overlay_0-597:{mountpoint:/var/lib/containers/storage/overlay/da9b902c054542a20c4ba509961ee000df84597ec69e33557efe36b013d9529d/merged major:0 minor:597 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/e8554e60a368d2de1ab84b00c74959a162d432efa0c67a99df69d78d70b73417/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-603:{mountpoint:/var/lib/containers/storage/overlay/df614fc362d101bdcc839bba24681f0893e9088252b42f6583e0aaaad68e7a5d/merged major:0 
minor:603 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/898d92ecba668bc2254f1866b8a871916ea4f12dcda89aa95d6292fcfd51289d/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/778aa1c44f55690ecae82358d03205d1cc72cd30c43d5b7e2cf9553ed7a0a60f/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/d8db675481efe8031d1130a8ab73b31f34804a80b59b6930fb472f138b2b2f25/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-649:{mountpoint:/var/lib/containers/storage/overlay/e7850cf24a4ddbc81807e95c8457ac49c45672d68a502395032461aff3b3ee12/merged major:0 minor:649 fsType:overlay blockSize:0} overlay_0-651:{mountpoint:/var/lib/containers/storage/overlay/3e2c933c2d569db87ece4487c40e24a5c62fa7b2654fe8cfe6ceebf503ea1d20/merged major:0 minor:651 fsType:overlay blockSize:0} overlay_0-653:{mountpoint:/var/lib/containers/storage/overlay/f3bb7e073ec7c23b652c1d0c052850b03ddb2858204aff4402a98d41e2d40067/merged major:0 minor:653 fsType:overlay blockSize:0} overlay_0-655:{mountpoint:/var/lib/containers/storage/overlay/a75738512a1b1fd7969af1164cd8a10dde483bfb5d72bc59805c7ad981dee312/merged major:0 minor:655 fsType:overlay blockSize:0} overlay_0-657:{mountpoint:/var/lib/containers/storage/overlay/aac5205230e6576c5750c175d83a56b1d6bb535f5c147dcbed71128e612efb39/merged major:0 minor:657 fsType:overlay blockSize:0} overlay_0-659:{mountpoint:/var/lib/containers/storage/overlay/a6bf35a1999a6de08c61afc27087f63596338e2d1ebca051efd04a6d974df562/merged major:0 minor:659 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/03da5873f43591be861b02562ca1d2790a2d787332dab1a161fc91dcd24b0ac0/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-667:{mountpoint:/var/lib/containers/storage/overlay/009e92e884db6f4270d89b408c3131149dadfbea0ef8f27d349f3b29b6a168c3/merged major:0 minor:667 fsType:overlay 
blockSize:0} overlay_0-669:{mountpoint:/var/lib/containers/storage/overlay/3d089ab0762325e1149a7d513962514e36c5d9fbcd6c293fdcfa6ef8ea49260f/merged major:0 minor:669 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/abd4af0123c6ead6a5cc05704a01019865715ad1dd717b18d861132c4a0ce515/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-671:{mountpoint:/var/lib/containers/storage/overlay/2cf7e2c19833a18279f21f382ee5793306a17a02aa488b76b05d8599b8349135/merged major:0 minor:671 fsType:overlay blockSize:0} overlay_0-672:{mountpoint:/var/lib/containers/storage/overlay/e87659173cfb187f5bb7667ce33ec74239ef15c6d4678bccb9218940f2f15551/merged major:0 minor:672 fsType:overlay blockSize:0} overlay_0-675:{mountpoint:/var/lib/containers/storage/overlay/e8f8209319d6470df9a19bfdcce66dec74b200e8e166f277d59fab0b35eebcd6/merged major:0 minor:675 fsType:overlay blockSize:0} overlay_0-676:{mountpoint:/var/lib/containers/storage/overlay/b0e695a0639a9317f4614c72c4b053730c932d54746c42e5578d55917c36bb0b/merged major:0 minor:676 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/3b5c89aaad9fe6d35145bdc823725bf4170c64c0447e07bf7db0a175dd43c20d/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-684:{mountpoint:/var/lib/containers/storage/overlay/79d7a7f887515faee6277c821d3dcb50fc3bf04002f3b5129e9ddea1d2fcb3a4/merged major:0 minor:684 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/5becdf3eb31c8f3ecd98f778d6a18f734d613083720e89ad0b6f3e84e857d424/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-688:{mountpoint:/var/lib/containers/storage/overlay/b6e86869ed4041af13bd231f8f3749217eade9f3d32049691ce963e13feab7e2/merged major:0 minor:688 fsType:overlay blockSize:0} overlay_0-690:{mountpoint:/var/lib/containers/storage/overlay/20220a3465b08028a5bea4bd6ac36eacb9e663ad073592818f8800b2dfa50d3f/merged major:0 minor:690 fsType:overlay blockSize:0} 
overlay_0-692:{mountpoint:/var/lib/containers/storage/overlay/d07e5e4b4def658923a1aef2ef9a3a318cd84a28fb78832b18d03437640991d7/merged major:0 minor:692 fsType:overlay blockSize:0} overlay_0-696:{mountpoint:/var/lib/containers/storage/overlay/c39315cccd95f8dadabcfe0d9fb90234ba32fec2aecc1a6cc71b0a71794d7c9f/merged major:0 minor:696 fsType:overlay blockSize:0} overlay_0-698:{mountpoint:/var/lib/containers/storage/overlay/bf1eb7c921497d38793e5e8995b48489aab3e9704c5af019d0ccd8df547241e7/merged major:0 minor:698 fsType:overlay blockSize:0} overlay_0-706:{mountpoint:/var/lib/containers/storage/overlay/fe0999bb7a2c1561625c096e8be726970b0fcefe2aea39beba6bfdaaea9efd19/merged major:0 minor:706 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/1013b700f226af0a02bb966cb30f17a3864b6bd82c56189f987638f84c031eaa/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-711:{mountpoint:/var/lib/containers/storage/overlay/507d32f0879d16f44ec147aac3675b570d2d14524217d4b12c1cd632f331caaf/merged major:0 minor:711 fsType:overlay blockSize:0} overlay_0-715:{mountpoint:/var/lib/containers/storage/overlay/b3e8c50105dd077ce2154e711131f94dd3409672e8882b1a345484e2eebd52b0/merged major:0 minor:715 fsType:overlay blockSize:0} overlay_0-723:{mountpoint:/var/lib/containers/storage/overlay/b087c9a6d40dc9e610114c46a133bf482941894921d7dccb9c8f9581b5c21247/merged major:0 minor:723 fsType:overlay blockSize:0} overlay_0-73:{mountpoint:/var/lib/containers/storage/overlay/50a5959b1f21d15a749604c7574ad3b44caf88cc0ca6198b70ac3dd224245365/merged major:0 minor:73 fsType:overlay blockSize:0} overlay_0-731:{mountpoint:/var/lib/containers/storage/overlay/1ea4057fa83b7023fb1ac2bd7c0dac31031dc9fbd09674711aedc07abd91ab58/merged major:0 minor:731 fsType:overlay blockSize:0} overlay_0-737:{mountpoint:/var/lib/containers/storage/overlay/15dce9c47a0acbca987d05656bc3366235e5299d2ea83fa6c340915e6762fcdb/merged major:0 minor:737 fsType:overlay blockSize:0} 
overlay_0-741:{mountpoint:/var/lib/containers/storage/overlay/5ffc56e59f24889a674a385fb5bff85e54ac0698ae2a0f561c7e0b602e4cbab4/merged major:0 minor:741 fsType:overlay blockSize:0} overlay_0-745:{mountpoint:/var/lib/containers/storage/overlay/16587fd4b9d44731d4abd19edd458937f2f18d3b3f6d4fe304c5f9d4c21ea0f3/merged major:0 minor:745 fsType:overlay blockSize:0} overlay_0-747:{mountpoint:/var/lib/containers/storage/overlay/ec8b21fda84e33003df3c52e28e45093b011a54e516e000dd89e3255a0f98c0f/merged major:0 minor:747 fsType:overlay blockSize:0} overlay_0-750:{mountpoint:/var/lib/containers/storage/overlay/8442b7b55ddd48ebf9ca563ab1074f9c2f4b5ad1ee485d8e5504f678d0369aa2/merged major:0 minor:750 fsType:overlay blockSize:0} overlay_0-763:{mountpoint:/var/lib/containers/storage/overlay/03f700e0898c67c665e232101ffc9c3b9922baba4a8bd924740fa5901af50cbb/merged major:0 minor:763 fsType:overlay blockSize:0} overlay_0-768:{mountpoint:/var/lib/containers/storage/overlay/5b3e37ac51af514057e5590dfbc34e185751ab691f73e0c63263c5c3885d9db4/merged major:0 minor:768 fsType:overlay blockSize:0} overlay_0-769:{mountpoint:/var/lib/containers/storage/overlay/bf0d896009fa60b68c5bfe3da1ed2d264c4d8aac681ff8a52eaff463e4d62210/merged major:0 minor:769 fsType:overlay blockSize:0} overlay_0-776:{mountpoint:/var/lib/containers/storage/overlay/0c9f24b591473b942c38f6c4905bb819ec191b7f8adfea7886aed469fe7b0b90/merged major:0 minor:776 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/66cca17fa044bba0ea7aa8712f63c8ba8b6c39c38abafa6f8cc97c02852acb0f/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-780:{mountpoint:/var/lib/containers/storage/overlay/b76adabf2a104abca032cf1f6a0c1358c93128d58c01fd43389f2d3a276e18d7/merged major:0 minor:780 fsType:overlay blockSize:0} overlay_0-782:{mountpoint:/var/lib/containers/storage/overlay/209bc03cf686878d7d7b7b451ccabd8f5369022b8eb3426444c68f513716a8d1/merged major:0 minor:782 fsType:overlay blockSize:0} 
overlay_0-784:{mountpoint:/var/lib/containers/storage/overlay/542358180cb408ebee32ac941156dba5f96c83a89dd25fb680aedbcf90a95fcf/merged major:0 minor:784 fsType:overlay blockSize:0} overlay_0-785:{mountpoint:/var/lib/containers/storage/overlay/fc588bac5a3f43a269dd5ed59d5e45af66cdc45bc9722f1f48087b9873fc29f0/merged major:0 minor:785 fsType:overlay blockSize:0} overlay_0-790:{mountpoint:/var/lib/containers/storage/overlay/9d8fd5adb827466f05a0a5fe80deef30f5e8d1668cd385365ed4c49296a91056/merged major:0 minor:790 fsType:overlay blockSize:0} overlay_0-794:{mountpoint:/var/lib/containers/storage/overlay/a046a354dd6a8ef42f8261ccc41d370d3dcbfee61e073b7b152508dc6a4ffa3d/merged major:0 minor:794 fsType:overlay blockSize:0} overlay_0-805:{mountpoint:/var/lib/containers/storage/overlay/23a22ff21e3a4c7b7b13813ce735dcae8ccb8355bc91d3615f6423b599c862f9/merged major:0 minor:805 fsType:overlay blockSize:0} overlay_0-807:{mountpoint:/var/lib/containers/storage/overlay/6d658abde78dbad4b72e474f4c87258f0b9865bd641f4215c5f0584e12c03af6/merged major:0 minor:807 fsType:overlay blockSize:0} overlay_0-821:{mountpoint:/var/lib/containers/storage/overlay/da4f946ddbd1e8c0b8b6ee9d7d4451a676c38e2926d96adc0369bc5f702efb43/merged major:0 minor:821 fsType:overlay blockSize:0} overlay_0-827:{mountpoint:/var/lib/containers/storage/overlay/b8db5ea8ea7e951f6bee7a24df814655f2d29cdae9d1b6048cd1402b676d7b0c/merged major:0 minor:827 fsType:overlay blockSize:0} overlay_0-831:{mountpoint:/var/lib/containers/storage/overlay/d921f497dde2bac13005fc9b3f4d6d7cc2accc6c197335adf6a80cc0c16728a6/merged major:0 minor:831 fsType:overlay blockSize:0} overlay_0-833:{mountpoint:/var/lib/containers/storage/overlay/2c87a2b3e285c1fee2b3ef465d6d63514d0d3e5da593bca16ee0506bd7b7d806/merged major:0 minor:833 fsType:overlay blockSize:0} overlay_0-836:{mountpoint:/var/lib/containers/storage/overlay/036ff0e0b7f796698f9b2c51d0f14c6258b6c9d41320841848cc1ca427305c34/merged major:0 minor:836 fsType:overlay blockSize:0} 
overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/0a3bc53c30e4517894d857227f2e579b0c174db8940e920ec4d79783f0560a94/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-841:{mountpoint:/var/lib/containers/storage/overlay/acfee4afb6afaf9c6210eb84bcbb752925ffc6bb48a4eaed9a2c3f205f4b79e5/merged major:0 minor:841 fsType:overlay blockSize:0} overlay_0-843:{mountpoint:/var/lib/containers/storage/overlay/95c094a651ac70d9355c54d91557fcf22b268cff9bf24425d51e41cee443704e/merged major:0 minor:843 fsType:overlay blockSize:0} overlay_0-844:{mountpoint:/var/lib/containers/storage/overlay/8dd3177fba3584b5eb00efa2565181714a7cb0d58880134dbde90bfe13df702d/merged major:0 minor:844 fsType:overlay blockSize:0} overlay_0-846:{mountpoint:/var/lib/containers/storage/overlay/6f2c2e1288dbd68a6298775e4575bf56f2dd105491038860f6168b44e848cf56/merged major:0 minor:846 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/09ab878d85706484c79ec8eb4e6e3afd650d29377f8177aac08bae9cb3a6fcfa/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-853:{mountpoint:/var/lib/containers/storage/overlay/e570271ec6b02f36539b42da90a3b9d0b7251aa73bbbcf282603e2d6f1403cff/merged major:0 minor:853 fsType:overlay blockSize:0} overlay_0-854:{mountpoint:/var/lib/containers/storage/overlay/e9a06130d9278a48d5cc2b5b6853e8891bc89bf9e267e49a86e32b583122152a/merged major:0 minor:854 fsType:overlay blockSize:0} overlay_0-858:{mountpoint:/var/lib/containers/storage/overlay/bedbdf219d81b6113426a26f83040617e58336b85da2991d9c0284d68b1ffbf2/merged major:0 minor:858 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/fe8e9f8b192a52130e24c9a052a5e8cbd3a0800b22b5a1390dfe7f4cd0c7ad48/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-866:{mountpoint:/var/lib/containers/storage/overlay/78ea4f3216ce2f6f8a7ee0a1b84638078aa71632e170bb41d008cd2ba12281e7/merged major:0 minor:866 fsType:overlay blockSize:0} 
overlay_0-867:{mountpoint:/var/lib/containers/storage/overlay/edd55f9b80974b53ea832d2853cc50fbfec6b5bdea6a7fdecbf0405c5a017e56/merged major:0 minor:867 fsType:overlay blockSize:0} overlay_0-869:{mountpoint:/var/lib/containers/storage/overlay/4009e53f9273f8bcec94f301c31b685eed0fa8cdd103311e644a62dd64c01fed/merged major:0 minor:869 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/f58c49352c42b515d9a59311d385e848983872424a459292768bd44d06adce9c/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-874:{mountpoint:/var/lib/containers/storage/overlay/3d8f63530c3a371e4145b0d34fa85a69d4ff505094ca66eb0e3fc6b2b255ee9e/merged major:0 minor:874 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/740a158649ae637198d7ed49c6533c351b6757a9fcd851431841595abf4cd66e/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/15cfa27562d55b8f7300da73b949200e0f205a929b3792c4bb837a74f65c2978/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-890:{mountpoint:/var/lib/containers/storage/overlay/e89cbd8cf137f3712104f4f66f2d6ad59dec17b2a800f8c41b74152ef77e3b67/merged major:0 minor:890 fsType:overlay blockSize:0} overlay_0-891:{mountpoint:/var/lib/containers/storage/overlay/441c01a0e5993818f05113cc6b8a5b81e3419858742aa00e2cfe77918b68985a/merged major:0 minor:891 fsType:overlay blockSize:0} overlay_0-898:{mountpoint:/var/lib/containers/storage/overlay/ad50bd28c6fe9821c9a0028084a19bebd617a07e4f91f835e62fb157fae456e4/merged major:0 minor:898 fsType:overlay blockSize:0} overlay_0-899:{mountpoint:/var/lib/containers/storage/overlay/86a3e4ec48164d47b81307bd12b7d13ba454f093a3b09cc0a2f0a97b1ddebf2c/merged major:0 minor:899 fsType:overlay blockSize:0} overlay_0-905:{mountpoint:/var/lib/containers/storage/overlay/8431f98c4bcf0bd5194dc23033f1289ce8ff4cea6b627b998ccea068beee2d17/merged major:0 minor:905 fsType:overlay blockSize:0} 
overlay_0-91:{mountpoint:/var/lib/containers/storage/overlay/f1023b1b8de9213b44bcc096259cc7cd7c115c59f0cd4d2b1f47654bd2106f17/merged major:0 minor:91 fsType:overlay blockSize:0} overlay_0-912:{mountpoint:/var/lib/containers/storage/overlay/3342a3df2fbd2474c494efc7b8bc1deb2ebfe658ac4239823dff829de4b7207b/merged major:0 minor:912 fsType:overlay blockSize:0} overlay_0-926:{mountpoint:/var/lib/containers/storage/overlay/daf71ae27302c1bf49f48c8f407d68ab7e9316e8c3bd2fb8cf1fda18a605b18d/merged major:0 minor:926 fsType:overlay blockSize:0} overlay_0-929:{mountpoint:/var/lib/containers/storage/overlay/142593d8064ee4aa28a93b37c1b72f31807530767ce6c156913ecd95076856d4/merged major:0 minor:929 fsType:overlay blockSize:0} overlay_0-936:{mountpoint:/var/lib/containers/storage/overlay/d35f841c3b246efdca33c8675f2f430f857016284e03076b15503b2759cba54c/merged major:0 minor:936 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/644f16db838b293b1075f77c0d5ab2432142a6fa80b0a9efb0d7098774c6a512/merged major:0 minor:94 fsType:overlay blockSize:0} overlay_0-940:{mountpoint:/var/lib/containers/storage/overlay/e49b075337f925126feccf362a92f83b35c931a6c465cbe134c923ff6fe17606/merged major:0 minor:940 fsType:overlay blockSize:0} overlay_0-945:{mountpoint:/var/lib/containers/storage/overlay/aa0c5d08037d1f51d8910c485d5eff3d83b831bf95c93d9366e8ab34fe481082/merged major:0 minor:945 fsType:overlay blockSize:0} overlay_0-951:{mountpoint:/var/lib/containers/storage/overlay/23234991f29773a705adef1a6fbc289d1dadf740e6b3580c8febd26039c02a87/merged major:0 minor:951 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/3d19f417867c5d9779aa0e6782a0dd7c6332c32bce178ebb3c580e0862744145/merged major:0 minor:96 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/3242bd410a7f5bc257006967722b623cbfc0019b31053d1ddd05a17bdf0bb947/merged major:0 minor:97 fsType:overlay blockSize:0} 
overlay_0-977:{mountpoint:/var/lib/containers/storage/overlay/fa968c05889129e0adf59b5adebe6ec58add025c42f0265cf566204ae71194f0/merged major:0 minor:977 fsType:overlay blockSize:0} overlay_0-980:{mountpoint:/var/lib/containers/storage/overlay/3d24dc24a63da49b68020e80cd16d2a34f4119ec61cb6d799a3ceed8766956c1/merged major:0 minor:980 fsType:overlay blockSize:0} overlay_0-982:{mountpoint:/var/lib/containers/storage/overlay/b724fd91168f88867cfec0fd14cbeebdfe08c03beadacf26e442214922e46837/merged major:0 minor:982 fsType:overlay blockSize:0} overlay_0-984:{mountpoint:/var/lib/containers/storage/overlay/39f184e5515514c1a8bfe83fc3e70ff371ceccab6f6a9ea8780dbacbd6b018d9/merged major:0 minor:984 fsType:overlay blockSize:0} overlay_0-986:{mountpoint:/var/lib/containers/storage/overlay/f1cf730ee07506be7035de8bffb98940447342b8db07f99582d426af85246799/merged major:0 minor:986 fsType:overlay blockSize:0} overlay_0-988:{mountpoint:/var/lib/containers/storage/overlay/90c393754af09371d10824978f8030a5acee463c99d329ff848251f73fdd47aa/merged major:0 minor:988 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/504b349aa8734fd2baf65b11510f4bb99dcada21352ae6c55a0951e6f6b5d0cd/merged major:0 minor:99 fsType:overlay blockSize:0}] Feb 24 02:20:57.000094 master-0 kubenswrapper[31411]: I0224 02:20:56.997568 31411 manager.go:217] Machine: {Timestamp:2026-02-24 02:20:56.996185295 +0000 UTC m=+0.213383181 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:1d448a69ed5349cda3229fbde6198537 SystemUUID:1d448a69-ed53-49cd-a322-9fbde6198537 BootID:db0156e3-cefa-4894-85d6-ad7931f79daa Filesystems:[{Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-737 DeviceMajor:0 DeviceMinor:737 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-846 DeviceMajor:0 DeviceMinor:846 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-715 DeviceMajor:0 DeviceMinor:715 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-936 DeviceMajor:0 DeviceMinor:936 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1086 DeviceMajor:0 DeviceMinor:1086 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/070ebb2d-57a2-4c76-8c93-e09d398f3b73/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:1325 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d/volumes/kubernetes.io~projected/kube-api-access-w6wvl DeviceMajor:0 DeviceMinor:1063 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1256 DeviceMajor:0 DeviceMinor:1256 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~projected/kube-api-access-dlg2j DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1042 DeviceMajor:0 DeviceMinor:1042 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1289 DeviceMajor:0 DeviceMinor:1289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-182 DeviceMajor:0 DeviceMinor:182 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4f5b3b93-a59d-495c-a311-8913fa6000fc/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:616 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1225 DeviceMajor:0 DeviceMinor:1225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-692 DeviceMajor:0 DeviceMinor:692 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-698 DeviceMajor:0 DeviceMinor:698 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74a7801b-b7a4-4292-91b3-6285c239aeb7/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:439 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-833 DeviceMajor:0 DeviceMinor:833 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22a83952-32ec-48f7-85cd-209b62362ae2/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/87721a77ed25537b4640a4bb3b51b25ccc9baed9db06541dd0ac651dd7e4b7bb/userdata/shm DeviceMajor:0 DeviceMinor:1319 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1339 DeviceMajor:0 DeviceMinor:1339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1110 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~projected/kube-api-access-qc5kx DeviceMajor:0 DeviceMinor:1113 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/24765ff1-5e7d-4100-ad81-8f73555fc0a2/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1180 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9538eb885cdee2fa9a588e118eb2c741ad080c47591f7c5e45f680a3f6d76460/userdata/shm DeviceMajor:0 DeviceMinor:1192 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1036 DeviceMajor:0 DeviceMinor:1036 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-177 DeviceMajor:0 DeviceMinor:177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a6e4933443321f6f827221301b84c881881ea51343c84ac3ad457e15891f86d0/userdata/shm DeviceMajor:0 DeviceMinor:301 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1154 DeviceMajor:0 DeviceMinor:1154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-511 DeviceMajor:0 DeviceMinor:511 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793/userdata/shm DeviceMajor:0 
DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3e36c9eb-0368-46dc-af84-9c602a15555d/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:954 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-527 DeviceMajor:0 DeviceMinor:527 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:924 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~projected/kube-api-access-kcq24 DeviceMajor:0 DeviceMinor:613 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-821 DeviceMajor:0 DeviceMinor:821 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1088 DeviceMajor:0 DeviceMinor:1088 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e8d6a6c0-b944-4206-9178-9a9930b303b9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:404 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-667 DeviceMajor:0 DeviceMinor:667 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-676 DeviceMajor:0 DeviceMinor:676 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-854 DeviceMajor:0 DeviceMinor:854 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-500 DeviceMajor:0 DeviceMinor:500 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/7ed7554e0b6eb88f1840ed2eee6ab3bddc21f230340186b0439d63f9a885eb31/userdata/shm DeviceMajor:0 DeviceMinor:478 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f97b854deca962131113e1e495c21e96d538f802eb3f5de41bedce3ba1452e3/userdata/shm DeviceMajor:0 DeviceMinor:1021 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~projected/kube-api-access-x6qs2 DeviceMajor:0 DeviceMinor:69 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f5861a89c1b826c96f8d7eb1735da2b4cdf59be101852d074de82cd32893d879/userdata/shm DeviceMajor:0 DeviceMinor:637 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-844 DeviceMajor:0 DeviceMinor:844 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1004 DeviceMajor:0 DeviceMinor:1004 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1038 DeviceMajor:0 DeviceMinor:1038 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-874 DeviceMajor:0 DeviceMinor:874 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c6153510-452b-4726-8b63-8cc894daa168/volumes/kubernetes.io~projected/kube-api-access-lxz8j DeviceMajor:0 DeviceMinor:419 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a2d8ef6-14ac-490d-a931-7082344d3f46/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:614 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4f5b3b93-a59d-495c-a311-8913fa6000fc/volumes/kubernetes.io~projected/kube-api-access-2tszx 
DeviceMajor:0 DeviceMinor:618 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/011c6603-d533-4449-b409-f6f698a3bd50/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:971 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~projected/kube-api-access-pwjpw DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9cad383a-cb69-41a8-aec8-23ee1c930430/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:830 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0127e0d5-9961-4ff6-851d-884e71e1dcf2/volumes/kubernetes.io~projected/kube-api-access-nbc5w DeviceMajor:0 DeviceMinor:922 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1174 DeviceMajor:0 DeviceMinor:1174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1200 DeviceMajor:0 DeviceMinor:1200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1263 DeviceMajor:0 DeviceMinor:1263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a2d8ef6-14ac-490d-a931-7082344d3f46/volumes/kubernetes.io~projected/kube-api-access-ddtsj DeviceMajor:0 DeviceMinor:617 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8ebd1a97-ff7b-4a10-a1b5-956e427478a8/volumes/kubernetes.io~projected/kube-api-access-gckc2 DeviceMajor:0 DeviceMinor:873 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/df2b8111-41c6-4333-b473-4c08fb836f70/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1162 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/77149d9718de6b23dc52d1b4901db52831b3adbf959e9b29aeeec05c6d2db97e/userdata/shm DeviceMajor:0 DeviceMinor:630 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a5305004-5311-4bc4-ad7c-6670f97c89cb/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1185 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/db8d6627-394c-4087-bfa4-bf7580f6bb4b/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:621 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-595 DeviceMajor:0 DeviceMinor:595 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/608a8a56-daee-4fa1-8300-42155217c68b/volumes/kubernetes.io~projected/kube-api-access-px2vd DeviceMajor:0 DeviceMinor:1190 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:164 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6320dbb5-b84d-4a57-8c65-fbed8421f84a/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:624 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/df42c69b-1a0e-41f5-9006-17540369b9ad/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:1015 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1206 
DeviceMajor:0 DeviceMinor:1206 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9cad383a-cb69-41a8-aec8-23ee1c930430/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:829 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0127e0d5-9961-4ff6-851d-884e71e1dcf2/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:914 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a8543f0f38e0eb6ba966eb5116372824fd9c0ed337e28520e9ed982545e8ec8c/userdata/shm DeviceMajor:0 DeviceMinor:1198 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1223 DeviceMajor:0 DeviceMinor:1223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c6153510-452b-4726-8b63-8cc894daa168/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:418 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e68b3061-c9d2-469d-babf-7ccac0ad9b14/volumes/kubernetes.io~projected/kube-api-access-6xz68 DeviceMajor:0 DeviceMinor:1296 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/55a2662a-d672-4a46-9b81-bfcaf334eedb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:405 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/560cc2fc6affd50d504fd0043ec0076b50148137946f51caca10417e2832ae2a/userdata/shm DeviceMajor:0 DeviceMinor:822 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-115 DeviceMajor:0 DeviceMinor:115 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:476 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~projected/kube-api-access-dqqkv DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/583b21f55c4eaab72f3731e41c571ee4872bace9012dd0a496219d9d98220f85/userdata/shm DeviceMajor:0 DeviceMinor:485 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-688 DeviceMajor:0 DeviceMinor:688 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1075 DeviceMajor:0 DeviceMinor:1075 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-221 DeviceMajor:0 DeviceMinor:221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9867de2597cef8ea21b27065bc6c5ebb42eda67f6c0e55aa21a2e92ed8089c54/userdata/shm DeviceMajor:0 DeviceMinor:813 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ca1250a6-30f0-4cc0-b9b0-eabde42aefcf/volumes/kubernetes.io~projected/kube-api-access-fqqwj DeviceMajor:0 DeviceMinor:1112 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-794 DeviceMajor:0 DeviceMinor:794 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f0367a6433cb34322b034ae4858460e50d6150d575d08858a19734f838b6527f/userdata/shm DeviceMajor:0 DeviceMinor:896 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-891 DeviceMajor:0 DeviceMinor:891 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1136 DeviceMajor:0 DeviceMinor:1136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-723 DeviceMajor:0 DeviceMinor:723 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-420 DeviceMajor:0 DeviceMinor:420 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a547b5d4267673c7d0d24b2e2ba4109ca6066121198db068b1fa1a5a39df064e/userdata/shm DeviceMajor:0 DeviceMinor:1194 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1303 DeviceMajor:0 DeviceMinor:1303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/df2b8111-41c6-4333-b473-4c08fb836f70/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1166 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-836 DeviceMajor:0 DeviceMinor:836 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:473 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:1061 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1131 DeviceMajor:0 DeviceMinor:1131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:1315 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1902deada2be96bfa5d915252c7df17f18da007080acab9c3aa02ba85365b1cc/userdata/shm DeviceMajor:0 DeviceMinor:481 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:466 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1120 DeviceMajor:0 DeviceMinor:1120 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6dc4ae2fbc88ea5c43de5d695fea8a7c8829343138c8856dcfeff187994c5c0f/userdata/shm DeviceMajor:0 DeviceMinor:583 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f571f0a4aeeadbbb146ec437860c7a57c1e485b485fb0691d9981c6a2b22a120/userdata/shm DeviceMajor:0 DeviceMinor:641 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64c3913ef0868e964da24e47fde7afcb2edc5db0527066ac2d8451806802e649/userdata/shm DeviceMajor:0 DeviceMinor:296 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5365291f917df72d79e3f7635bb96352fde98df3b379b1b1c5331b7f5952d294/userdata/shm DeviceMajor:0 DeviceMinor:638 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-444 
DeviceMajor:0 DeviceMinor:444 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0cb042de-c873-408c-a4c4-ef9f7e546a08/volumes/kubernetes.io~projected/kube-api-access-p9ngc DeviceMajor:0 DeviceMinor:576 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-945 DeviceMajor:0 DeviceMinor:945 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b/volumes/kubernetes.io~projected/kube-api-access-bbnd2 DeviceMajor:0 DeviceMinor:118 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-768 DeviceMajor:0 DeviceMinor:768 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/29c6111030d71a276fc5ae8422a3897c52faae1bbf5d2f44516c595b0829852b/userdata/shm DeviceMajor:0 DeviceMinor:811 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/91168f3d-70eb-4351-bb83-5411a96ad29d/volumes/kubernetes.io~projected/kube-api-access-rt2q4 DeviceMajor:0 DeviceMinor:970 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da0959fc5c7a27175270ce726463fd3e9e8da5aff2a8a6bf45a477613fc17349/userdata/shm DeviceMajor:0 DeviceMinor:974 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bb7daa4c061606545ddec9122a80563e4f785ed9c98cafaa54bb7196f126bd02/userdata/shm 
DeviceMajor:0 DeviceMinor:543 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0ce6dd93-084c-4e15-8b7c-e0829a6df14e/volumes/kubernetes.io~projected/kube-api-access-q8msx DeviceMajor:0 DeviceMinor:1013 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1134 DeviceMajor:0 DeviceMinor:1134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35cb29197091c21cf559145587727d1cb31b46813a4d0aded5d8409120c45182/userdata/shm DeviceMajor:0 DeviceMinor:634 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1334 DeviceMajor:0 DeviceMinor:1334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/608a8a56-daee-4fa1-8300-42155217c68b/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1184 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/29621914ea05b7d9aefb3ef92742f6212ca05bc6251d28674ae45265f66276a1/userdata/shm DeviceMajor:0 DeviceMinor:325 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:610 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/608a8a56-daee-4fa1-8300-42155217c68b/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1186 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/83490c1a955fe6b943eda48c6b81b0120dda14df023aa9b81ab0e80b7e90cadf/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-585 DeviceMajor:0 DeviceMinor:585 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:620 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f2e9cdff-8c15-43df-b8df-7fe3a73fda86/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:625 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2dab12c36fbca650a107bc58df00044fd6561209f9c466f04a4c8ce72b69201d/userdata/shm DeviceMajor:0 DeviceMinor:1121 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-763 DeviceMajor:0 DeviceMinor:763 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/638b3f88-0386-4f30-8ca5-6255e8f936fc/volumes/kubernetes.io~projected/kube-api-access-996wg DeviceMajor:0 DeviceMinor:571 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/kube-api-access-9sp95 DeviceMajor:0 DeviceMinor:282 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:471 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-655 DeviceMajor:0 DeviceMinor:655 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-684 DeviceMajor:0 DeviceMinor:684 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e70a9f5-1154-40e9-a487-21e36e7f420a/volumes/kubernetes.io~projected/kube-api-access-mb7jb DeviceMajor:0 DeviceMinor:1011 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-342 DeviceMajor:0 DeviceMinor:342 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-905 DeviceMajor:0 DeviceMinor:905 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c/volumes/kubernetes.io~projected/kube-api-access-qpg44 DeviceMajor:0 DeviceMinor:582 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-690 DeviceMajor:0 DeviceMinor:690 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1202 DeviceMajor:0 DeviceMinor:1202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a97c301937f1a0e25ebd74de8f7b7dfda3c088599ac5506143f0a1006a2bb044/userdata/shm DeviceMajor:0 DeviceMinor:80 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441/userdata/shm DeviceMajor:0 DeviceMinor:311 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-515 DeviceMajor:0 DeviceMinor:515 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-827 DeviceMajor:0 
DeviceMinor:827 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1299 DeviceMajor:0 DeviceMinor:1299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ebd1a97-ff7b-4a10-a1b5-956e427478a8/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:872 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7a26b2c8abf7070791144c3b808d314860b4cb305adbb9095d34928ddbac7f4a/userdata/shm DeviceMajor:0 DeviceMinor:1020 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/638b3f88-0386-4f30-8ca5-6255e8f936fc/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:567 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-574 DeviceMajor:0 DeviceMinor:574 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-437 DeviceMajor:0 DeviceMinor:437 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1157 DeviceMajor:0 DeviceMinor:1157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-951 DeviceMajor:0 DeviceMinor:951 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-741 DeviceMajor:0 DeviceMinor:741 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1170 
DeviceMajor:0 DeviceMinor:1170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1219 DeviceMajor:0 DeviceMinor:1219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d789b7d1d1c624f3c1461f3405b95a301ab5f66347a0727135e2339f341d9052/userdata/shm DeviceMajor:0 DeviceMinor:628 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/91168f3d-70eb-4351-bb83-5411a96ad29d/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:956 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6a9ccd8e-d964-4c03-8ffc-51b464030c25/volumes/kubernetes.io~projected/kube-api-access-ssz8p DeviceMajor:0 DeviceMinor:276 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-426 DeviceMajor:0 DeviceMinor:426 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e94bb6d8da81f692c353aed9041e8cea1ef96da518c0c68ab1453f8b2183856/userdata/shm DeviceMajor:0 DeviceMinor:619 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-603 DeviceMajor:0 DeviceMinor:603 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0ce6dd93-084c-4e15-8b7c-e0829a6df14e/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:1012 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1040 DeviceMajor:0 DeviceMinor:1040 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~projected/kube-api-access-rc8jx DeviceMajor:0 DeviceMinor:141 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/390a7aa5-c7f7-4baf-a2d2-e6da9a465042/volumes/kubernetes.io~projected/kube-api-access-dkpjn DeviceMajor:0 DeviceMinor:587 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/21aa7b4dfda40f1610fd6b64e23f1c617ce7b50ea96960fc42e2a8aaa9a792b2/userdata/shm DeviceMajor:0 DeviceMinor:589 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-653 DeviceMajor:0 DeviceMinor:653 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e0c87ae-6387-4c00-b03d-582566907fb6/volumes/kubernetes.io~projected/kube-api-access-5dcvb DeviceMajor:0 DeviceMinor:979 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1032 DeviceMajor:0 DeviceMinor:1032 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-197 DeviceMajor:0 DeviceMinor:197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/14bae3b4a416d85f1092a8f00a7f0b630edcc978d596d53dd23dbb652596ecd8/userdata/shm DeviceMajor:0 DeviceMinor:1297 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-401 DeviceMajor:0 DeviceMinor:401 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e3a675b9-feaa-4456-b7b4-0cd3afc42a42/volumes/kubernetes.io~projected/kube-api-access-nn8hz DeviceMajor:0 
DeviceMinor:331 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/adc1097b-c1ab-4f09-965d-1c819671475b/volumes/kubernetes.io~projected/kube-api-access-nqtld DeviceMajor:0 DeviceMinor:163 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-531 DeviceMajor:0 DeviceMinor:531 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5c3398a6c263edc9332a777f898c18bf8d4d5354af4bc2396e80f920a1e77f07/userdata/shm DeviceMajor:0 DeviceMinor:751 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~projected/kube-api-access-twgrj DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2cb764f6-40f8-4e87-8be0-b9d7b0364201/volumes/kubernetes.io~projected/kube-api-access-sp8hv DeviceMajor:0 DeviceMinor:286 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-414 DeviceMajor:0 DeviceMinor:414 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1221 DeviceMajor:0 DeviceMinor:1221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/46f23e74184a869450a53e076049b086fc11c3d08fab3acc813aa63061b356f3/userdata/shm DeviceMajor:0 DeviceMinor:70 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-858 DeviceMajor:0 DeviceMinor:858 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-867 DeviceMajor:0 
DeviceMinor:867 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-591 DeviceMajor:0 DeviceMinor:591 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1284 DeviceMajor:0 DeviceMinor:1284 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-745 DeviceMajor:0 DeviceMinor:745 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-853 DeviceMajor:0 DeviceMinor:853 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e70a9f5-1154-40e9-a487-21e36e7f420a/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:1010 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0034746b398351f91b0a88e97985b40bb4895c122d618141a5cb5cca87941d23/userdata/shm DeviceMajor:0 DeviceMinor:808 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/638b3f88-0386-4f30-8ca5-6255e8f936fc/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:570 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-805 DeviceMajor:0 DeviceMinor:805 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1323 DeviceMajor:0 DeviceMinor:1323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3d94304059d808624e692a18999e46c1ed32aa07c16bb3ea5a63de6a687dd377/userdata/shm DeviceMajor:0 DeviceMinor:338 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~projected/kube-api-access-bsb4q DeviceMajor:0 DeviceMinor:468 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-986 DeviceMajor:0 DeviceMinor:986 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1030 DeviceMajor:0 DeviceMinor:1030 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f64a1a9e81543288c082ba54493b536ca2db47fef63c0b6ea8e2ecd8d4fc6a3b/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-841 DeviceMajor:0 DeviceMinor:841 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/276e47463c76f9595550735ca5a2eb97f44bfa685298a20ea61ee705f8a41bd4/userdata/shm DeviceMajor:0 DeviceMinor:1130 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/45681e7db0a00432167c0ceb01dfa150d4182b397673d5a0da048e4b9054ffea/userdata/shm DeviceMajor:0 DeviceMinor:771 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-831 DeviceMajor:0 DeviceMinor:831 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-982 DeviceMajor:0 
DeviceMinor:982 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-843 DeviceMajor:0 DeviceMinor:843 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/523033b8-4101-4a55-8320-55bef04ddaaf/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a4267e3a-aaaf-4b2f-a37c-0f097a35783f/volumes/kubernetes.io~projected/kube-api-access-5pc72 DeviceMajor:0 DeviceMinor:815 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-379 DeviceMajor:0 DeviceMinor:379 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1108 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/55a2662a-d672-4a46-9b81-bfcaf334eedb/volumes/kubernetes.io~projected/kube-api-access-gzghr DeviceMajor:0 DeviceMinor:749 Capacity:49335554048 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/56b5dc5b3e9740ae05d95dc7b2a84307e363cddd956bef52b197b1f840f462b7/userdata/shm DeviceMajor:0 DeviceMinor:479 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/volumes/kubernetes.io~projected/kube-api-access-n2b65 DeviceMajor:0 DeviceMinor:294 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1071 DeviceMajor:0 DeviceMinor:1071 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d7b6636edfdbd08e77deab1053c053904d89a5b39e0804954f8c80e56f1c9467/userdata/shm DeviceMajor:0 DeviceMinor:47 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7b8c524be621d3b232cfcc53d4958e12d26da68f7931a17964ec87b85eee7bba/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-399 DeviceMajor:0 DeviceMinor:399 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1212 DeviceMajor:0 DeviceMinor:1212 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-73 DeviceMajor:0 DeviceMinor:73 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-898 DeviceMajor:0 DeviceMinor:898 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/89afab5096911f8752c791dc8598fa3869a80370cd36dec7298c9b9d91c19d81/userdata/shm DeviceMajor:0 DeviceMinor:440 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8e0c87ae-6387-4c00-b03d-582566907fb6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:983 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-869 DeviceMajor:0 DeviceMinor:869 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-940 DeviceMajor:0 DeviceMinor:940 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-441 DeviceMajor:0 DeviceMinor:441 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b098bd4-5751-4b01-8409-0688fd29233e/volumes/kubernetes.io~projected/kube-api-access-86pcb DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a02536a3-7d3e-4e74-9625-aefed518ec35/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:467 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-659 DeviceMajor:0 DeviceMinor:659 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1002 DeviceMajor:0 DeviceMinor:1002 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e68b3061-c9d2-469d-babf-7ccac0ad9b14/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1291 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-187 DeviceMajor:0 DeviceMinor:187 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-547 DeviceMajor:0 DeviceMinor:547 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-696 DeviceMajor:0 DeviceMinor:696 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/61e5e77001e1e5b4b53f6c82868401419bbcf0e5600dbe4c283c403c8bc8a720/userdata/shm DeviceMajor:0 DeviceMinor:837 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e8d6a6c0-b944-4206-9178-9a9930b303b9/volumes/kubernetes.io~projected/kube-api-access-zxjtb DeviceMajor:0 DeviceMinor:736 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/229e86eaf4e88e77ef5e6c4bc8577da2618b97072b856ff2c58bb725165574ff/userdata/shm DeviceMajor:0 DeviceMinor:442 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1028 DeviceMajor:0 DeviceMinor:1028 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-977 DeviceMajor:0 DeviceMinor:977 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1237 DeviceMajor:0 DeviceMinor:1237 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1286 DeviceMajor:0 DeviceMinor:1286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-430 DeviceMajor:0 DeviceMinor:430 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-517 DeviceMajor:0 DeviceMinor:517 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-513 DeviceMajor:0 DeviceMinor:513 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39f5130f968afbc399fff9d4193b2dc7547fc4010b24b675d8cfe4f908871554/userdata/shm DeviceMajor:0 DeviceMinor:632 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/824e46bd462387249603454ca45d03ebca612478cdeb336ee057821a4d25b262/userdata/shm DeviceMajor:0 DeviceMinor:1062 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1141 DeviceMajor:0 DeviceMinor:1141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1073 DeviceMajor:0 DeviceMinor:1073 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1301 DeviceMajor:0 DeviceMinor:1301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5ba2f6486b90f665f4193dee37876ce40336ba0c3b009bf85c911f6014a84585/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/89a7efb88fa53095b161b71e1b8530a4c4c20e49713a6786f3a59609c9325838/userdata/shm DeviceMajor:0 DeviceMinor:636 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-554 DeviceMajor:0 DeviceMinor:554 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/627eb8f17d5fd787312a09b22ed574b8b738c499ce476e336267fe5d3546a7b9/userdata/shm DeviceMajor:0 DeviceMinor:998 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-496 DeviceMajor:0 DeviceMinor:496 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e7c778232fad4af52f47c31c73f233a65718cb5d7849085291ba01455710c481/userdata/shm DeviceMajor:0 DeviceMinor:360 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-769 DeviceMajor:0 DeviceMinor:769 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cee49e60dc37b41a9c1559a523e9c4e3b09f5f3e76df27a36cc4a9d63ff6bee9/userdata/shm DeviceMajor:0 DeviceMinor:503 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-675 DeviceMajor:0 DeviceMinor:675 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/db8d6627-394c-4087-bfa4-bf7580f6bb4b/volumes/kubernetes.io~projected/kube-api-access-x6lsp DeviceMajor:0 DeviceMinor:283 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-461 DeviceMajor:0 DeviceMinor:461 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-545 DeviceMajor:0 DeviceMinor:545 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-597 DeviceMajor:0 DeviceMinor:597 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1026 DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-711 DeviceMajor:0 DeviceMinor:711 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-912 DeviceMajor:0 DeviceMinor:912 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-498 DeviceMajor:0 DeviceMinor:498 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5305004-5311-4bc4-ad7c-6670f97c89cb/volumes/kubernetes.io~projected/kube-api-access-kznmr DeviceMajor:0 DeviceMinor:1191 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:909 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-780 DeviceMajor:0 DeviceMinor:780 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb67bb4fcbc0cf30dc19aad2f8b3b13f31473c855e7d30010f86d687f8822d44/userdata/shm DeviceMajor:0 DeviceMinor:1127 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-524 DeviceMajor:0 DeviceMinor:524 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5305004-5311-4bc4-ad7c-6670f97c89cb/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1187 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1336 DeviceMajor:0 DeviceMinor:1336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-368 DeviceMajor:0 DeviceMinor:368 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/303d5058-84df-40d1-a941-896b093ae470/volumes/kubernetes.io~projected/kube-api-access-79bl6 DeviceMajor:0 DeviceMinor:291 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1276 DeviceMajor:0 DeviceMinor:1276 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fb39fcc8-beb4-410e-b2a4-0b3e150719cc/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b085f760-0e24-41a8-af09-538396aad935/volumes/kubernetes.io~projected/kube-api-access-q86gx DeviceMajor:0 DeviceMinor:713 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/24765ff1-5e7d-4100-ad81-8f73555fc0a2/volumes/kubernetes.io~projected/kube-api-access-98725 DeviceMajor:0 DeviceMinor:1189 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1cc1996551692c223eb12edcadd4f14bef06fed859ebb6d00f4391944783b38d/userdata/shm DeviceMajor:0 DeviceMinor:280 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/12b89e05-a503-47aa-90b2-4d741e015b19/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:623 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a4cea44a-1c6e-465f-97df-2c951056cb85/volumes/kubernetes.io~projected/kube-api-access-57x9m DeviceMajor:0 DeviceMinor:457 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-988 DeviceMajor:0 DeviceMinor:988 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-588 DeviceMajor:0 DeviceMinor:588 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/586631f1005e0eec9e04637dd3347ca45f3a799902b12b7ee4c09257fef0aee9/userdata/shm DeviceMajor:0 DeviceMinor:424 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9cad383a-cb69-41a8-aec8-23ee1c930430/volumes/kubernetes.io~projected/kube-api-access-svc78 DeviceMajor:0 DeviceMinor:839 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/74a7801b-b7a4-4292-91b3-6285c239aeb7/volumes/kubernetes.io~projected/kube-api-access-pdmhx DeviceMajor:0 DeviceMinor:520 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-890 DeviceMajor:0 DeviceMinor:890 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a4cea44a-1c6e-465f-97df-2c951056cb85/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:417 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/volumes/kubernetes.io~projected/kube-api-access-q7lsb DeviceMajor:0 DeviceMinor:350 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-984 DeviceMajor:0 DeviceMinor:984 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-521 DeviceMajor:0 DeviceMinor:521 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:581 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:611 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e56a17d6-d740-4349-833e-b5279f7db2d4/volumes/kubernetes.io~projected/kube-api-access-gg7sb DeviceMajor:0 DeviceMinor:804 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c28e00882d5ec6f229538a77e2756ba9244f00b81361306d5103b5a9571bb19f/userdata/shm DeviceMajor:0 DeviceMinor:176 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/25190a18-bdac-479b-b526-840d28636be3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:508 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fcbda577-b943-4b5c-b041-948aece8e40f/volumes/kubernetes.io~projected/kube-api-access-vpg26 DeviceMajor:0 DeviceMinor:295 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0a122f4d5531f3489b5545a54ec94812a3a4adf1ceb59316f98f88f87840e7dc/userdata/shm DeviceMajor:0 DeviceMinor:1125 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-594 DeviceMajor:0 DeviceMinor:594 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9b5620d6-a5fe-45d7-b39e-8bed7f602a17/volumes/kubernetes.io~projected/kube-api-access-jtf52 
DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-651 DeviceMajor:0 DeviceMinor:651 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/df2b8111-41c6-4333-b473-4c08fb836f70/volumes/kubernetes.io~projected/kube-api-access-cd796 DeviceMajor:0 DeviceMinor:1167 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4f5b3b93-a59d-495c-a311-8913fa6000fc/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:615 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/91d16f7b-390a-4d9d-99d6-cc8e210801d1/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:626 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/24765ff1-5e7d-4100-ad81-8f73555fc0a2/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1188 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~projected/kube-api-access-6nwzm DeviceMajor:0 DeviceMinor:273 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-649 DeviceMajor:0 DeviceMinor:649 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1cf9c3623efa047b3c733a0c601bd847d659d71e97fdf999c590347704f0d5c3/userdata/shm DeviceMajor:0 DeviceMinor:1014 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~projected/kube-api-access-n4grf DeviceMajor:0 DeviceMinor:1251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737/userdata/shm DeviceMajor:0 DeviceMinor:1330 Capacity:67108864 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1000 DeviceMajor:0 DeviceMinor:1000 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/02f1d753-983a-4c4a-b1a0-560de173859a/volumes/kubernetes.io~projected/kube-api-access-mb52w DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-980 DeviceMajor:0 DeviceMinor:980 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-672 DeviceMajor:0 DeviceMinor:672 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f85222bf-f51a-4232-8db1-1e6ee593617b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/df42c69b-1a0e-41f5-9006-17540369b9ad/volumes/kubernetes.io~projected/kube-api-access-f4q7n DeviceMajor:0 DeviceMinor:1016 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/513f9261949841afe139d9cdba0a1314c71b8cc3ca522e4a37e97a5c0f7cd056/userdata/shm DeviceMajor:0 DeviceMinor:1168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1196 DeviceMajor:0 DeviceMinor:1196 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1204 DeviceMajor:0 DeviceMinor:1204 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c92835f0-7f32-4584-8304-843d7979392a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/90a0d6bae4f861a78e1bdfe5f47cce060c508d92cdd797bd0fed3982c351779c/userdata/shm DeviceMajor:0 DeviceMinor:403 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:474 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-657 DeviceMajor:0 DeviceMinor:657 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-731 DeviceMajor:0 DeviceMinor:731 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6a08a1e4-cf92-4733-a8af-c7ac5b21e925/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1106 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1158 DeviceMajor:0 DeviceMinor:1158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d8e20d47-aeb6-41bf-9715-c437beb8e9e4/volumes/kubernetes.io~projected/kube-api-access-qv6t5 DeviceMajor:0 DeviceMinor:298 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-572 DeviceMajor:0 DeviceMinor:572 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-790 DeviceMajor:0 DeviceMinor:790 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-463 DeviceMajor:0 DeviceMinor:463 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57811d07-ae8a-44b7-8efb-dafc5afad31e/volumes/kubernetes.io~projected/kube-api-access-vrmsh DeviceMajor:0 DeviceMinor:128 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-192 DeviceMajor:0 DeviceMinor:192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-566 DeviceMajor:0 DeviceMinor:566 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~projected/kube-api-access-njjq8 DeviceMajor:0 DeviceMinor:266 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/91d16f7b-390a-4d9d-99d6-cc8e210801d1/volumes/kubernetes.io~projected/kube-api-access-b8rjx DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-750 DeviceMajor:0 DeviceMinor:750 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/200fe8533b917a93531075b1165c1c0e535a0cad7b646b4108d64e256389832b/userdata/shm DeviceMajor:0 DeviceMinor:818 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1034 DeviceMajor:0 DeviceMinor:1034 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3e36c9eb-0368-46dc-af84-9c602a15555d/volumes/kubernetes.io~projected/kube-api-access-lbzsl DeviceMajor:0 DeviceMinor:953 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-205 DeviceMajor:0 DeviceMinor:205 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/cabdddba-5507-4e47-98ef-a00c6d0f305d/volumes/kubernetes.io~projected/kube-api-access-h6f7j DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-412 DeviceMajor:0 DeviceMinor:412 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-782 DeviceMajor:0 DeviceMinor:782 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca4a08102d80addfcc85dbdd564f6e40965982eca8126d325ae121c2e1c48c40/userdata/shm DeviceMajor:0 DeviceMinor:122 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-494 DeviceMajor:0 DeviceMinor:494 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-866 DeviceMajor:0 DeviceMinor:866 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-926 DeviceMajor:0 DeviceMinor:926 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1022 DeviceMajor:0 DeviceMinor:1022 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/011c6603-d533-4449-b409-f6f698a3bd50/volumes/kubernetes.io~projected/kube-api-access-xh4wr DeviceMajor:0 DeviceMinor:972 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-1217 DeviceMajor:0 DeviceMinor:1217 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1684de33a1b99214450be6d5f8c060f5a9f1a7c517e642d15f1d667f8a119c75/userdata/shm DeviceMajor:0 DeviceMinor:1252 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-899 DeviceMajor:0 DeviceMinor:899 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1321 DeviceMajor:0 DeviceMinor:1321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0fcf5ec505a8c60fa755fbe1033404d9a2bfa8dd51c8a8904db6a212bec7d594/userdata/shm DeviceMajor:0 DeviceMinor:640 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-784 DeviceMajor:0 DeviceMinor:784 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c84dc269-43ae-4083-9998-a0b3c90bb681/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:477 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c8a44d641739b0edde589e3cc2ab82e120d1f854cda8b41d7ab46952d705c4b9/userdata/shm DeviceMajor:0 DeviceMinor:933 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/70e2ba24-4871-4d1d-9935-156fdbeb2810/volumes/kubernetes.io~projected/kube-api-access-4nmd6 DeviceMajor:0 DeviceMinor:135 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b176946a-c056-441c-9145-b88ca4d75758/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:612 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-929 DeviceMajor:0 DeviceMinor:929 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1123 DeviceMajor:0 DeviceMinor:1123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1103 DeviceMajor:0 DeviceMinor:1103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1/volumes/kubernetes.io~projected/kube-api-access-rv6zq DeviceMajor:0 DeviceMinor:925 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-776 DeviceMajor:0 DeviceMinor:776 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b36d8451-0fda-4d9d-a850-d05c8f847016/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6320dbb5-b84d-4a57-8c65-fbed8421f84a/volumes/kubernetes.io~projected/kube-api-access-pgjlz DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94/userdata/shm DeviceMajor:0 DeviceMinor:315 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1313 DeviceMajor:0 DeviceMinor:1313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-747 DeviceMajor:0 DeviceMinor:747 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-807 DeviceMajor:0 DeviceMinor:807 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 
DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2cb764f6-40f8-4e87-8be0-b9d7b0364201/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:472 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-669 DeviceMajor:0 DeviceMinor:669 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2e9cdff-8c15-43df-b8df-7fe3a73fda86/volumes/kubernetes.io~projected/kube-api-access-82hfh DeviceMajor:0 DeviceMinor:275 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-671 DeviceMajor:0 DeviceMinor:671 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/732a3831-20e0-47dc-a29a-8bb4659541b7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:475 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7b4e3ba0-5194-4e20-8f12-dea4b67504fe/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:470 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/volumes/kubernetes.io~projected/kube-api-access-qph4g DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2335a36c0dbf2382a55062b41bd9de9b70220499140428315aa61fdfd6dde11f/userdata/shm DeviceMajor:0 DeviceMinor:973 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1238 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-416 DeviceMajor:0 DeviceMinor:416 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/732a3831-20e0-47dc-a29a-8bb4659541b7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:110 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1018 DeviceMajor:0 DeviceMinor:1018 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1254 DeviceMajor:0 DeviceMinor:1254 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3332acec-1553-4594-a903-a322399f6d9d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-564 DeviceMajor:0 DeviceMinor:564 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1140 DeviceMajor:0 DeviceMinor:1140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-706 DeviceMajor:0 DeviceMinor:706 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-785 DeviceMajor:0 DeviceMinor:785 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70e2ba24-4871-4d1d-9935-156fdbeb2810/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:627 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-456 DeviceMajor:0 DeviceMinor:456 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/f807f33c-8132-48a8-ab12-4b54c1cd2b10/volumes/kubernetes.io~projected/kube-api-access-g8742 DeviceMajor:0 DeviceMinor:359 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/68a30a55ed2f979625e18a77c39c55f0bd820b511f058e5d010e556725054ded/userdata/shm DeviceMajor:0 DeviceMinor:487 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a75855ac22ad61c526e140082a63e50802db589f96d5c1f8fe72f371e5c93069/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0034746b398351f MacAddress:8a:f2:f5:00:27:b2 Speed:10000 Mtu:8900} {Name:0a122f4d5531f34 MacAddress:ea:cb:2c:62:b7:55 Speed:10000 Mtu:8900} {Name:0e94bb6d8da81f6 MacAddress:d6:a4:cf:f8:b8:dc Speed:10000 Mtu:8900} {Name:0fcf5ec505a8c60 MacAddress:de:ff:fa:9f:5f:a2 Speed:10000 Mtu:8900} {Name:14bae3b4a416d85 MacAddress:c6:07:b7:4d:f8:c7 Speed:10000 Mtu:8900} {Name:1684de33a1b9921 MacAddress:9e:ed:bc:84:41:4a Speed:10000 Mtu:8900} {Name:1902deada2be96b MacAddress:66:85:c0:0a:8d:a3 Speed:10000 Mtu:8900} {Name:1cc1996551692c2 MacAddress:5a:29:ec:ef:58:96 Speed:10000 Mtu:8900} {Name:200fe8533b917a9 MacAddress:4a:2a:fc:9b:81:73 Speed:10000 Mtu:8900} {Name:229e86eaf4e88e7 MacAddress:06:0b:fa:d0:2e:f2 Speed:10000 Mtu:8900} {Name:2335a36c0dbf238 MacAddress:aa:1d:80:52:a7:a1 Speed:10000 Mtu:8900} {Name:29c6111030d71a2 MacAddress:5e:db:63:37:1c:9c Speed:10000 Mtu:8900} {Name:35a3cac3cfce949 MacAddress:ee:9e:9b:75:36:0d Speed:10000 Mtu:8900} 
{Name:35cb29197091c21 MacAddress:22:8e:a4:03:a9:40 Speed:10000 Mtu:8900} {Name:39f5130f968afbc MacAddress:12:69:23:1b:3f:d3 Speed:10000 Mtu:8900} {Name:3d94304059d8086 MacAddress:76:9b:36:01:fc:eb Speed:10000 Mtu:8900} {Name:45681e7db0a0043 MacAddress:62:cd:1c:bf:c8:60 Speed:10000 Mtu:8900} {Name:513f9261949841a MacAddress:9a:dd:d6:e5:e3:39 Speed:10000 Mtu:8900} {Name:5365291f917df72 MacAddress:06:52:aa:46:fd:24 Speed:10000 Mtu:8900} {Name:56b5dc5b3e9740a MacAddress:56:94:0c:17:0f:ef Speed:10000 Mtu:8900} {Name:583b21f55c4eaab MacAddress:46:44:bc:ab:b8:cf Speed:10000 Mtu:8900} {Name:586631f1005e0ee MacAddress:4e:90:3b:b3:43:3d Speed:10000 Mtu:8900} {Name:5c3398a6c263edc MacAddress:ca:52:89:2b:e6:83 Speed:10000 Mtu:8900} {Name:61e5e77001e1e5b MacAddress:42:ad:5f:17:79:13 Speed:10000 Mtu:8900} {Name:627eb8f17d5fd78 MacAddress:82:27:38:d5:68:2f Speed:10000 Mtu:8900} {Name:64c3913ef0868e9 MacAddress:7e:3f:1e:1d:af:fd Speed:10000 Mtu:8900} {Name:68a30a55ed2f979 MacAddress:26:30:9b:e7:30:f4 Speed:10000 Mtu:8900} {Name:6dc4ae2fbc88ea5 MacAddress:96:65:27:34:60:5e Speed:10000 Mtu:8900} {Name:77149d9718de6b2 MacAddress:0e:c7:52:de:32:1c Speed:10000 Mtu:8900} {Name:7a26b2c8abf7070 MacAddress:ea:4b:41:a2:e4:61 Speed:10000 Mtu:8900} {Name:7ed7554e0b6eb88 MacAddress:8e:31:8e:06:25:a5 Speed:10000 Mtu:8900} {Name:824e46bd4623872 MacAddress:ce:54:11:7d:4d:a6 Speed:10000 Mtu:8900} {Name:83490c1a955fe6b MacAddress:e6:33:82:ce:bc:48 Speed:10000 Mtu:8900} {Name:87721a77ed25537 MacAddress:92:f1:0e:55:d5:73 Speed:10000 Mtu:8900} {Name:89a7efb88fa5309 MacAddress:2a:5b:a4:0b:ec:2a Speed:10000 Mtu:8900} {Name:89afab5096911f8 MacAddress:b6:7f:c9:be:e6:7f Speed:10000 Mtu:8900} {Name:90a0d6bae4f861a MacAddress:fa:7e:f4:a2:6d:2a Speed:10000 Mtu:8900} {Name:9538eb885cdee2f MacAddress:76:4b:c9:3b:46:0a Speed:10000 Mtu:8900} {Name:9867de2597cef8e MacAddress:92:ae:4f:97:f3:69 Speed:10000 Mtu:8900} {Name:9b9766c83ab547d MacAddress:46:8b:6f:b4:88:e8 Speed:10000 Mtu:8900} {Name:a6e4933443321f6 
MacAddress:d6:6d:9f:5c:d5:20 Speed:10000 Mtu:8900} {Name:a75855ac22ad61c MacAddress:8e:db:c6:e3:b7:1f Speed:10000 Mtu:8900} {Name:a8543f0f38e0eb6 MacAddress:9e:c8:c2:72:09:64 Speed:10000 Mtu:8900} {Name:b9883c9e4f2305d MacAddress:8a:19:0f:3f:7d:df Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:56:04:d0:ce:78:ac Speed:0 Mtu:8900} {Name:c8a44d641739b0e MacAddress:f6:6c:ce:a3:4f:fb Speed:10000 Mtu:8900} {Name:cee49e60dc37b41 MacAddress:56:ce:55:ff:51:01 Speed:10000 Mtu:8900} {Name:d789b7d1d1c624f MacAddress:66:7b:3b:8e:ec:3d Speed:10000 Mtu:8900} {Name:d884f8a9271a3be MacAddress:0e:0d:b7:3a:9b:0f Speed:10000 Mtu:8900} {Name:da0959fc5c7a271 MacAddress:56:bf:ba:8b:ea:fa Speed:10000 Mtu:8900} {Name:e54e467fec77344 MacAddress:ce:70:87:ab:71:a9 Speed:10000 Mtu:8900} {Name:e6d28a4266f3905 MacAddress:76:ad:3d:9a:5b:1f Speed:10000 Mtu:8900} {Name:e7c778232fad4af MacAddress:de:f3:40:58:47:42 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:b3:1a:4a Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:91:46:2f Speed:-1 Mtu:9000} {Name:f0367a6433cb343 MacAddress:56:fc:fb:f9:8b:8b Speed:10000 Mtu:8900} {Name:f571f0a4aeeadbb MacAddress:86:fc:75:0a:35:8a Speed:10000 Mtu:8900} {Name:f5861a89c1b826c MacAddress:5e:10:ec:a4:d2:90 Speed:10000 Mtu:8900} {Name:f64a1a9e8154328 MacAddress:96:6b:44:66:74:fa Speed:10000 Mtu:8900} {Name:fb67bb4fcbc0cf3 MacAddress:aa:c5:30:9d:2d:e8 Speed:10000 Mtu:8900} {Name:ff39808811189af MacAddress:ae:4c:bd:18:d8:ba Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:72:e3:6b:de:66:46 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 
Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 24 02:20:57.000837 master-0 kubenswrapper[31411]: I0224 02:20:57.000142 31411 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 24 02:20:57.000837 master-0 kubenswrapper[31411]: I0224 02:20:57.000248 31411 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 24 02:20:57.000837 master-0 kubenswrapper[31411]: I0224 02:20:57.000692 31411 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.000980 31411 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001029 31411 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"P
ercentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001353 31411 topology_manager.go:138] "Creating topology manager with none policy" Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001368 31411 container_manager_linux.go:303] "Creating device plugin manager" Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001380 31411 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001415 31411 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001468 31411 state_mem.go:36] "Initialized new in-memory state store" Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001615 31411 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001701 31411 kubelet.go:418] "Attempting to sync node with API server" Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001719 31411 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 24 02:20:57.001718 master-0 kubenswrapper[31411]: I0224 02:20:57.001742 31411 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 24 02:20:57.002370 master-0 kubenswrapper[31411]: I0224 02:20:57.001763 31411 kubelet.go:324] "Adding apiserver pod source" Feb 
24 02:20:57.002370 master-0 kubenswrapper[31411]: I0224 02:20:57.001789 31411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 24 02:20:57.007171 master-0 kubenswrapper[31411]: I0224 02:20:57.007088 31411 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 24 02:20:57.007538 master-0 kubenswrapper[31411]: I0224 02:20:57.007480 31411 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 24 02:20:57.010093 master-0 kubenswrapper[31411]: I0224 02:20:57.010027 31411 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 24 02:20:57.010947 master-0 kubenswrapper[31411]: I0224 02:20:57.010903 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 24 02:20:57.011037 master-0 kubenswrapper[31411]: I0224 02:20:57.010956 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 24 02:20:57.011098 master-0 kubenswrapper[31411]: I0224 02:20:57.011042 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 24 02:20:57.011098 master-0 kubenswrapper[31411]: I0224 02:20:57.011068 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 24 02:20:57.011098 master-0 kubenswrapper[31411]: I0224 02:20:57.011091 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 24 02:20:57.011289 master-0 kubenswrapper[31411]: I0224 02:20:57.011107 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 24 02:20:57.011289 master-0 kubenswrapper[31411]: I0224 02:20:57.011125 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 24 02:20:57.011915 master-0 kubenswrapper[31411]: I0224 02:20:57.011861 31411 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Feb 24 02:20:57.011915 master-0 kubenswrapper[31411]: I0224 02:20:57.011909 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 24 02:20:57.012058 master-0 kubenswrapper[31411]: I0224 02:20:57.011932 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 24 02:20:57.012058 master-0 kubenswrapper[31411]: I0224 02:20:57.011958 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 24 02:20:57.012058 master-0 kubenswrapper[31411]: I0224 02:20:57.011990 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 24 02:20:57.012237 master-0 kubenswrapper[31411]: I0224 02:20:57.012064 31411 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 24 02:20:57.012975 master-0 kubenswrapper[31411]: I0224 02:20:57.012931 31411 server.go:1280] "Started kubelet" Feb 24 02:20:57.013157 master-0 kubenswrapper[31411]: I0224 02:20:57.013086 31411 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 24 02:20:57.015644 master-0 kubenswrapper[31411]: I0224 02:20:57.013684 31411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 24 02:20:57.015644 master-0 kubenswrapper[31411]: I0224 02:20:57.013888 31411 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 24 02:20:57.015644 master-0 kubenswrapper[31411]: I0224 02:20:57.014802 31411 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 24 02:20:57.014274 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 24 02:20:57.018764 master-0 kubenswrapper[31411]: I0224 02:20:57.018713 31411 server.go:449] "Adding debug handlers to kubelet server"
Feb 24 02:20:57.025439 master-0 kubenswrapper[31411]: I0224 02:20:57.025381 31411 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 24 02:20:57.034478 master-0 kubenswrapper[31411]: I0224 02:20:57.034408 31411 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 24 02:20:57.044103 master-0 kubenswrapper[31411]: I0224 02:20:57.044055 31411 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 24 02:20:57.044103 master-0 kubenswrapper[31411]: I0224 02:20:57.044105 31411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 24 02:20:57.046978 master-0 kubenswrapper[31411]: E0224 02:20:57.046906 31411 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 24 02:20:57.047455 master-0 kubenswrapper[31411]: I0224 02:20:57.047387 31411 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-25 01:54:04 +0000 UTC, rotation deadline is 2026-02-24 18:57:07.857176405 +0000 UTC
Feb 24 02:20:57.047455 master-0 kubenswrapper[31411]: I0224 02:20:57.047451 31411 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h36m10.809729353s for next certificate rotation
Feb 24 02:20:57.049115 master-0 kubenswrapper[31411]: I0224 02:20:57.049082 31411 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 24 02:20:57.049115 master-0 kubenswrapper[31411]: I0224 02:20:57.049103 31411 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 24 02:20:57.049977 master-0 kubenswrapper[31411]: I0224 02:20:57.049805 31411 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 24 02:20:57.051168 master-0 kubenswrapper[31411]: I0224 02:20:57.051052 31411 factory.go:55] Registering systemd factory
Feb 24 02:20:57.051311 master-0 kubenswrapper[31411]: I0224 02:20:57.051188 31411 factory.go:221] Registration of the systemd container factory successfully
Feb 24 02:20:57.051731 master-0 kubenswrapper[31411]: I0224 02:20:57.051695 31411 factory.go:153] Registering CRI-O factory
Feb 24 02:20:57.051731 master-0 kubenswrapper[31411]: I0224 02:20:57.051724 31411 factory.go:221] Registration of the crio container factory successfully
Feb 24 02:20:57.052320 master-0 kubenswrapper[31411]: I0224 02:20:57.051848 31411 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 24 02:20:57.052320 master-0 kubenswrapper[31411]: I0224 02:20:57.051901 31411 factory.go:103] Registering Raw factory
Feb 24 02:20:57.052320 master-0 kubenswrapper[31411]: I0224 02:20:57.051929 31411 manager.go:1196] Started watching for new ooms in manager
Feb 24 02:20:57.053019 master-0 kubenswrapper[31411]: I0224 02:20:57.052967 31411 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 24 02:20:57.053431 master-0 kubenswrapper[31411]: I0224 02:20:57.053298 31411 manager.go:319] Starting recovery of all containers
Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074256 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="011c6603-d533-4449-b409-f6f698a3bd50" volumeName="kubernetes.io/projected/011c6603-d533-4449-b409-f6f698a3bd50-kube-api-access-xh4wr" seLinuxMountContext=""
Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074389 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="3e36c9eb-0368-46dc-af84-9c602a15555d" volumeName="kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074448 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f5b3b93-a59d-495c-a311-8913fa6000fc" volumeName="kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-ca-certs" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074480 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="732a3831-20e0-47dc-a29a-8bb4659541b7" volumeName="kubernetes.io/secret/732a3831-20e0-47dc-a29a-8bb4659541b7-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074510 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" volumeName="kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-config" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074539 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c396c41-c617-4631-9700-a7052af5a276" volumeName="kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074604 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2b8111-41c6-4333-b473-4c08fb836f70" volumeName="kubernetes.io/configmap/df2b8111-41c6-4333-b473-4c08fb836f70-metrics-client-ca" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074635 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c" volumeName="kubernetes.io/configmap/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-config" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074714 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a02536a3-7d3e-4e74-9625-aefed518ec35" volumeName="kubernetes.io/projected/a02536a3-7d3e-4e74-9625-aefed518ec35-kube-api-access" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074821 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02f1d753-983a-4c4a-b1a0-560de173859a" volumeName="kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074846 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25190a18-bdac-479b-b526-840d28636be3" volumeName="kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-etcd-client" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074868 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" volumeName="kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-images" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074910 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/secret/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovn-node-metrics-cert" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074952 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25190a18-bdac-479b-b526-840d28636be3" volumeName="kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-image-import-ca" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.074990 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55a2662a-d672-4a46-9b81-bfcaf334eedb" volumeName="kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.075014 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e2ba24-4871-4d1d-9935-156fdbeb2810" volumeName="kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.075071 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4cea44a-1c6e-465f-97df-2c951056cb85" volumeName="kubernetes.io/secret/a4cea44a-1c6e-465f-97df-2c951056cb85-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.075091 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adc1097b-c1ab-4f09-965d-1c819671475b" volumeName="kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-env-overrides" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.075119 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d" volumeName="kubernetes.io/configmap/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-mcc-auth-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.075148 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="91d16f7b-390a-4d9d-99d6-cc8e210801d1" volumeName="kubernetes.io/projected/91d16f7b-390a-4d9d-99d6-cc8e210801d1-kube-api-access-b8rjx" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.075177 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c" volumeName="kubernetes.io/projected/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-kube-api-access-qpg44" seLinuxMountContext="" Feb 24 02:20:57.075166 master-0 kubenswrapper[31411]: I0224 02:20:57.075215 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ebd1a97-ff7b-4a10-a1b5-956e427478a8" volumeName="kubernetes.io/secret/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-machine-approver-tls" seLinuxMountContext="" Feb 24 02:20:57.076691 master-0 kubenswrapper[31411]: I0224 02:20:57.075245 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b176946a-c056-441c-9145-b88ca4d75758" volumeName="kubernetes.io/projected/b176946a-c056-441c-9145-b88ca4d75758-kube-api-access-kcq24" seLinuxMountContext="" Feb 24 02:20:57.076691 master-0 kubenswrapper[31411]: I0224 02:20:57.075333 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" volumeName="kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-bound-sa-token" seLinuxMountContext="" Feb 24 02:20:57.076691 master-0 kubenswrapper[31411]: I0224 02:20:57.076647 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6153510-452b-4726-8b63-8cc894daa168" volumeName="kubernetes.io/configmap/c6153510-452b-4726-8b63-8cc894daa168-signing-cabundle" seLinuxMountContext="" Feb 24 02:20:57.076691 master-0 kubenswrapper[31411]: I0224 02:20:57.076685 31411 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/projected/cabdddba-5507-4e47-98ef-a00c6d0f305d-kube-api-access-h6f7j" seLinuxMountContext="" Feb 24 02:20:57.076927 master-0 kubenswrapper[31411]: I0224 02:20:57.076731 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2b8111-41c6-4333-b473-4c08fb836f70" volumeName="kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.076927 master-0 kubenswrapper[31411]: I0224 02:20:57.076788 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="12b89e05-a503-47aa-90b2-4d741e015b19" volumeName="kubernetes.io/projected/12b89e05-a503-47aa-90b2-4d741e015b19-kube-api-access-twgrj" seLinuxMountContext="" Feb 24 02:20:57.076927 master-0 kubenswrapper[31411]: I0224 02:20:57.076811 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cad383a-cb69-41a8-aec8-23ee1c930430" volumeName="kubernetes.io/empty-dir/9cad383a-cb69-41a8-aec8-23ee1c930430-tmpfs" seLinuxMountContext="" Feb 24 02:20:57.076927 master-0 kubenswrapper[31411]: I0224 02:20:57.076852 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db8d6627-394c-4087-bfa4-bf7580f6bb4b" volumeName="kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-auth-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.076927 master-0 kubenswrapper[31411]: I0224 02:20:57.076872 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8d6a6c0-b944-4206-9178-9a9930b303b9" volumeName="kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles" seLinuxMountContext="" Feb 24 02:20:57.076927 master-0 kubenswrapper[31411]: I0224 02:20:57.076895 31411 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="f2e9cdff-8c15-43df-b8df-7fe3a73fda86" volumeName="kubernetes.io/projected/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-kube-api-access-82hfh" seLinuxMountContext="" Feb 24 02:20:57.077270 master-0 kubenswrapper[31411]: I0224 02:20:57.076974 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0127e0d5-9961-4ff6-851d-884e71e1dcf2" volumeName="kubernetes.io/projected/0127e0d5-9961-4ff6-851d-884e71e1dcf2-kube-api-access-nbc5w" seLinuxMountContext="" Feb 24 02:20:57.077270 master-0 kubenswrapper[31411]: I0224 02:20:57.077013 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24765ff1-5e7d-4100-ad81-8f73555fc0a2" volumeName="kubernetes.io/configmap/24765ff1-5e7d-4100-ad81-8f73555fc0a2-metrics-client-ca" seLinuxMountContext="" Feb 24 02:20:57.077270 master-0 kubenswrapper[31411]: I0224 02:20:57.077049 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5da829af-05fb-4f6e-9bec-c4dcc9cbec4b" volumeName="kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-daemon-config" seLinuxMountContext="" Feb 24 02:20:57.077270 master-0 kubenswrapper[31411]: I0224 02:20:57.077071 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="608a8a56-daee-4fa1-8300-42155217c68b" volumeName="kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.077270 master-0 kubenswrapper[31411]: I0224 02:20:57.077092 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e0c87ae-6387-4c00-b03d-582566907fb6" volumeName="kubernetes.io/secret/8e0c87ae-6387-4c00-b03d-582566907fb6-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.077270 master-0 kubenswrapper[31411]: I0224 
02:20:57.077126 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcbda577-b943-4b5c-b041-948aece8e40f" volumeName="kubernetes.io/secret/fcbda577-b943-4b5c-b041-948aece8e40f-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.077270 master-0 kubenswrapper[31411]: I0224 02:20:57.077183 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6320dbb5-b84d-4a57-8c65-fbed8421f84a" volumeName="kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.077270 master-0 kubenswrapper[31411]: I0224 02:20:57.077225 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/secret/cabdddba-5507-4e47-98ef-a00c6d0f305d-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.077270 master-0 kubenswrapper[31411]: I0224 02:20:57.077281 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2e9cdff-8c15-43df-b8df-7fe3a73fda86" volumeName="kubernetes.io/configmap/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-telemetry-config" seLinuxMountContext="" Feb 24 02:20:57.077816 master-0 kubenswrapper[31411]: I0224 02:20:57.077304 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="303d5058-84df-40d1-a941-896b093ae470" volumeName="kubernetes.io/projected/303d5058-84df-40d1-a941-896b093ae470-kube-api-access-79bl6" seLinuxMountContext="" Feb 24 02:20:57.077816 master-0 kubenswrapper[31411]: I0224 02:20:57.077338 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-service-ca-bundle" seLinuxMountContext="" Feb 24 02:20:57.077816 master-0 
kubenswrapper[31411]: I0224 02:20:57.077460 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" volumeName="kubernetes.io/projected/f6e7b773-7ecd-4a5c-8bef-d672f371e7e5-kube-api-access-q7lsb" seLinuxMountContext="" Feb 24 02:20:57.077816 master-0 kubenswrapper[31411]: I0224 02:20:57.077506 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25190a18-bdac-479b-b526-840d28636be3" volumeName="kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-encryption-config" seLinuxMountContext="" Feb 24 02:20:57.077816 master-0 kubenswrapper[31411]: I0224 02:20:57.077527 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="608a8a56-daee-4fa1-8300-42155217c68b" volumeName="kubernetes.io/projected/608a8a56-daee-4fa1-8300-42155217c68b-kube-api-access-px2vd" seLinuxMountContext="" Feb 24 02:20:57.077816 master-0 kubenswrapper[31411]: I0224 02:20:57.077565 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="638b3f88-0386-4f30-8ca5-6255e8f936fc" volumeName="kubernetes.io/projected/638b3f88-0386-4f30-8ca5-6255e8f936fc-kube-api-access-996wg" seLinuxMountContext="" Feb 24 02:20:57.077816 master-0 kubenswrapper[31411]: I0224 02:20:57.077636 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ebd1a97-ff7b-4a10-a1b5-956e427478a8" volumeName="kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-auth-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.077816 master-0 kubenswrapper[31411]: I0224 02:20:57.077769 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db8d6627-394c-4087-bfa4-bf7580f6bb4b" volumeName="kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-images" seLinuxMountContext="" Feb 24 02:20:57.077816 
master-0 kubenswrapper[31411]: I0224 02:20:57.077808 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ce6dd93-084c-4e15-8b7c-e0829a6df14e" volumeName="kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-images" seLinuxMountContext="" Feb 24 02:20:57.078337 master-0 kubenswrapper[31411]: I0224 02:20:57.077856 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ce6dd93-084c-4e15-8b7c-e0829a6df14e" volumeName="kubernetes.io/secret/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-machine-api-operator-tls" seLinuxMountContext="" Feb 24 02:20:57.078337 master-0 kubenswrapper[31411]: I0224 02:20:57.077948 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57811d07-ae8a-44b7-8efb-dafc5afad31e" volumeName="kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-binary-copy" seLinuxMountContext="" Feb 24 02:20:57.078337 master-0 kubenswrapper[31411]: I0224 02:20:57.078059 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" volumeName="kubernetes.io/projected/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-kube-api-access-qc5kx" seLinuxMountContext="" Feb 24 02:20:57.078337 master-0 kubenswrapper[31411]: I0224 02:20:57.078099 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b085f760-0e24-41a8-af09-538396aad935" volumeName="kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-catalog-content" seLinuxMountContext="" Feb 24 02:20:57.078337 master-0 kubenswrapper[31411]: I0224 02:20:57.078145 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3332acec-1553-4594-a903-a322399f6d9d" volumeName="kubernetes.io/secret/3332acec-1553-4594-a903-a322399f6d9d-metrics-tls" seLinuxMountContext="" Feb 24 02:20:57.078337 
master-0 kubenswrapper[31411]: I0224 02:20:57.078250 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b176946a-c056-441c-9145-b88ca4d75758" volumeName="kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-etcd-serving-ca" seLinuxMountContext="" Feb 24 02:20:57.078709 master-0 kubenswrapper[31411]: I0224 02:20:57.078371 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8e20d47-aeb6-41bf-9715-c437beb8e9e4" volumeName="kubernetes.io/configmap/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-iptables-alerter-script" seLinuxMountContext="" Feb 24 02:20:57.078709 master-0 kubenswrapper[31411]: I0224 02:20:57.078411 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.078709 master-0 kubenswrapper[31411]: I0224 02:20:57.078446 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c84dc269-43ae-4083-9998-a0b3c90bb681" volumeName="kubernetes.io/configmap/c84dc269-43ae-4083-9998-a0b3c90bb681-trusted-ca" seLinuxMountContext="" Feb 24 02:20:57.078709 master-0 kubenswrapper[31411]: I0224 02:20:57.078477 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca1250a6-30f0-4cc0-b9b0-eabde42aefcf" volumeName="kubernetes.io/projected/ca1250a6-30f0-4cc0-b9b0-eabde42aefcf-kube-api-access-fqqwj" seLinuxMountContext="" Feb 24 02:20:57.078963 master-0 kubenswrapper[31411]: I0224 02:20:57.078740 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e36c9eb-0368-46dc-af84-9c602a15555d" volumeName="kubernetes.io/projected/3e36c9eb-0368-46dc-af84-9c602a15555d-kube-api-access-lbzsl" seLinuxMountContext="" Feb 24 
02:20:57.078963 master-0 kubenswrapper[31411]: I0224 02:20:57.078780 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b176946a-c056-441c-9145-b88ca4d75758" volumeName="kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-etcd-client" seLinuxMountContext="" Feb 24 02:20:57.078963 master-0 kubenswrapper[31411]: I0224 02:20:57.078870 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8e20d47-aeb6-41bf-9715-c437beb8e9e4" volumeName="kubernetes.io/projected/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-kube-api-access-qv6t5" seLinuxMountContext="" Feb 24 02:20:57.079142 master-0 kubenswrapper[31411]: I0224 02:20:57.078974 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5305004-5311-4bc4-ad7c-6670f97c89cb" volumeName="kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-tls" seLinuxMountContext="" Feb 24 02:20:57.079142 master-0 kubenswrapper[31411]: I0224 02:20:57.079081 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df42c69b-1a0e-41f5-9006-17540369b9ad" volumeName="kubernetes.io/projected/df42c69b-1a0e-41f5-9006-17540369b9ad-kube-api-access-f4q7n" seLinuxMountContext="" Feb 24 02:20:57.079267 master-0 kubenswrapper[31411]: I0224 02:20:57.079167 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25190a18-bdac-479b-b526-840d28636be3" volumeName="kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-audit" seLinuxMountContext="" Feb 24 02:20:57.079267 master-0 kubenswrapper[31411]: I0224 02:20:57.079205 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a2d8ef6-14ac-490d-a931-7082344d3f46" volumeName="kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-kube-api-access-ddtsj" seLinuxMountContext="" Feb 
24 02:20:57.079267 master-0 kubenswrapper[31411]: I0224 02:20:57.079241 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd" volumeName="kubernetes.io/projected/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kube-api-access" seLinuxMountContext="" Feb 24 02:20:57.079444 master-0 kubenswrapper[31411]: I0224 02:20:57.079278 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91168f3d-70eb-4351-bb83-5411a96ad29d" volumeName="kubernetes.io/configmap/91168f3d-70eb-4351-bb83-5411a96ad29d-auth-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.079444 master-0 kubenswrapper[31411]: I0224 02:20:57.079298 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b098bd4-5751-4b01-8409-0688fd29233e" volumeName="kubernetes.io/projected/7b098bd4-5751-4b01-8409-0688fd29233e-kube-api-access-86pcb" seLinuxMountContext="" Feb 24 02:20:57.079444 master-0 kubenswrapper[31411]: I0224 02:20:57.079320 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b176946a-c056-441c-9145-b88ca4d75758" volumeName="kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-trusted-ca-bundle" seLinuxMountContext="" Feb 24 02:20:57.079444 master-0 kubenswrapper[31411]: I0224 02:20:57.079355 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" volumeName="kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-kube-api-access-qph4g" seLinuxMountContext="" Feb 24 02:20:57.079444 master-0 kubenswrapper[31411]: I0224 02:20:57.079392 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c84dc269-43ae-4083-9998-a0b3c90bb681" volumeName="kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-bound-sa-token" 
seLinuxMountContext="" Feb 24 02:20:57.079786 master-0 kubenswrapper[31411]: I0224 02:20:57.079479 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db8d6627-394c-4087-bfa4-bf7580f6bb4b" volumeName="kubernetes.io/projected/db8d6627-394c-4087-bfa4-bf7580f6bb4b-kube-api-access-x6lsp" seLinuxMountContext="" Feb 24 02:20:57.079786 master-0 kubenswrapper[31411]: I0224 02:20:57.079505 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="011c6603-d533-4449-b409-f6f698a3bd50" volumeName="kubernetes.io/secret/011c6603-d533-4449-b409-f6f698a3bd50-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.079786 master-0 kubenswrapper[31411]: I0224 02:20:57.079523 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" volumeName="kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-default-certificate" seLinuxMountContext="" Feb 24 02:20:57.079786 master-0 kubenswrapper[31411]: I0224 02:20:57.079637 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="732a3831-20e0-47dc-a29a-8bb4659541b7" volumeName="kubernetes.io/configmap/732a3831-20e0-47dc-a29a-8bb4659541b7-service-ca" seLinuxMountContext="" Feb 24 02:20:57.079786 master-0 kubenswrapper[31411]: I0224 02:20:57.079659 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4cea44a-1c6e-465f-97df-2c951056cb85" volumeName="kubernetes.io/projected/a4cea44a-1c6e-465f-97df-2c951056cb85-kube-api-access-57x9m" seLinuxMountContext="" Feb 24 02:20:57.079786 master-0 kubenswrapper[31411]: I0224 02:20:57.079680 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="303d5058-84df-40d1-a941-896b093ae470" 
volumeName="kubernetes.io/secret/303d5058-84df-40d1-a941-896b093ae470-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.081078 master-0 kubenswrapper[31411]: I0224 02:20:57.080990 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57811d07-ae8a-44b7-8efb-dafc5afad31e" volumeName="kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-sysctl-allowlist" seLinuxMountContext="" Feb 24 02:20:57.081078 master-0 kubenswrapper[31411]: I0224 02:20:57.081069 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57811d07-ae8a-44b7-8efb-dafc5afad31e" volumeName="kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-whereabouts-configmap" seLinuxMountContext="" Feb 24 02:20:57.081078 master-0 kubenswrapper[31411]: I0224 02:20:57.081091 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a02536a3-7d3e-4e74-9625-aefed518ec35" volumeName="kubernetes.io/configmap/a02536a3-7d3e-4e74-9625-aefed518ec35-config" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081110 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5305004-5311-4bc4-ad7c-6670f97c89cb" volumeName="kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081128 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" volumeName="kubernetes.io/configmap/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-trusted-ca" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081144 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" volumeName="kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081159 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8d6a6c0-b944-4206-9178-9a9930b303b9" volumeName="kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081176 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-ca" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081194 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="523033b8-4101-4a55-8320-55bef04ddaaf" volumeName="kubernetes.io/projected/523033b8-4101-4a55-8320-55bef04ddaaf-kube-api-access-dlg2j" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081208 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5305004-5311-4bc4-ad7c-6670f97c89cb" volumeName="kubernetes.io/empty-dir/a5305004-5311-4bc4-ad7c-6670f97c89cb-volume-directive-shadow" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081224 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2b8111-41c6-4333-b473-4c08fb836f70" volumeName="kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-tls" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081238 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e68b3061-c9d2-469d-babf-7ccac0ad9b14" volumeName="kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081252 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b36d8451-0fda-4d9d-a850-d05c8f847016" volumeName="kubernetes.io/secret/b36d8451-0fda-4d9d-a850-d05c8f847016-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081267 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8d6a6c0-b944-4206-9178-9a9930b303b9" volumeName="kubernetes.io/projected/e8d6a6c0-b944-4206-9178-9a9930b303b9-kube-api-access-zxjtb" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081282 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5da829af-05fb-4f6e-9bec-c4dcc9cbec4b" volumeName="kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cni-binary-copy" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081297 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="638b3f88-0386-4f30-8ca5-6255e8f936fc" volumeName="kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-tuned" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081319 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adc1097b-c1ab-4f09-965d-1c819671475b" volumeName="kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-ovnkube-identity-cm" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081338 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b176946a-c056-441c-9145-b88ca4d75758" volumeName="kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081352 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d" volumeName="kubernetes.io/secret/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-proxy-tls" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081365 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8d6a6c0-b944-4206-9178-9a9930b303b9" volumeName="kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081379 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-script-lib" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081395 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcbda577-b943-4b5c-b041-948aece8e40f" volumeName="kubernetes.io/projected/fcbda577-b943-4b5c-b041-948aece8e40f-kube-api-access-vpg26" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081411 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="390a7aa5-c7f7-4baf-a2d2-e6da9a465042" volumeName="kubernetes.io/projected/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-kube-api-access-dkpjn" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081426 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8e70a9f5-1154-40e9-a487-21e36e7f420a" volumeName="kubernetes.io/projected/8e70a9f5-1154-40e9-a487-21e36e7f420a-kube-api-access-mb7jb" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081441 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25190a18-bdac-479b-b526-840d28636be3" volumeName="kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081492 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57811d07-ae8a-44b7-8efb-dafc5afad31e" volumeName="kubernetes.io/projected/57811d07-ae8a-44b7-8efb-dafc5afad31e-kube-api-access-vrmsh" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081516 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6320dbb5-b84d-4a57-8c65-fbed8421f84a" volumeName="kubernetes.io/projected/6320dbb5-b84d-4a57-8c65-fbed8421f84a-kube-api-access-pgjlz" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081538 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5305004-5311-4bc4-ad7c-6670f97c89cb" volumeName="kubernetes.io/projected/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-api-access-kznmr" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081552 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5305004-5311-4bc4-ad7c-6670f97c89cb" volumeName="kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081586 31411 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="b085f760-0e24-41a8-af09-538396aad935" volumeName="kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-utilities" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081604 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b36d8451-0fda-4d9d-a850-d05c8f847016" volumeName="kubernetes.io/projected/b36d8451-0fda-4d9d-a850-d05c8f847016-kube-api-access-njjq8" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081623 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-config" seLinuxMountContext="" Feb 24 02:20:57.081553 master-0 kubenswrapper[31411]: I0224 02:20:57.081641 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f85222bf-f51a-4232-8db1-1e6ee593617b" volumeName="kubernetes.io/projected/f85222bf-f51a-4232-8db1-1e6ee593617b-kube-api-access" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081659 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0127e0d5-9961-4ff6-851d-884e71e1dcf2" volumeName="kubernetes.io/secret/0127e0d5-9961-4ff6-851d-884e71e1dcf2-samples-operator-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081679 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb042de-c873-408c-a4c4-ef9f7e546a08" volumeName="kubernetes.io/projected/0cb042de-c873-408c-a4c4-ef9f7e546a08-kube-api-access-p9ngc" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081696 31411 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="523033b8-4101-4a55-8320-55bef04ddaaf" volumeName="kubernetes.io/secret/523033b8-4101-4a55-8320-55bef04ddaaf-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081714 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c396c41-c617-4631-9700-a7052af5a276" volumeName="kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081730 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e0c87ae-6387-4c00-b03d-582566907fb6" volumeName="kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-trusted-ca-bundle" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081747 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e0c87ae-6387-4c00-b03d-582566907fb6" volumeName="kubernetes.io/projected/8e0c87ae-6387-4c00-b03d-582566907fb6-kube-api-access-5dcvb" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081761 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91d16f7b-390a-4d9d-99d6-cc8e210801d1" volumeName="kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081775 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1" volumeName="kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-node-bootstrap-token" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081792 31411 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b176946a-c056-441c-9145-b88ca4d75758" volumeName="kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-audit-policies" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081805 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db8d6627-394c-4087-bfa4-bf7580f6bb4b" volumeName="kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081822 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081853 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="070ebb2d-57a2-4c76-8c93-e09d398f3b73" volumeName="kubernetes.io/projected/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kube-api-access" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081871 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91168f3d-70eb-4351-bb83-5411a96ad29d" volumeName="kubernetes.io/projected/91168f3d-70eb-4351-bb83-5411a96ad29d-kube-api-access-rt2q4" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081884 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="608a8a56-daee-4fa1-8300-42155217c68b" volumeName="kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081897 
31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df42c69b-1a0e-41f5-9006-17540369b9ad" volumeName="kubernetes.io/configmap/df42c69b-1a0e-41f5-9006-17540369b9ad-mcd-auth-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081914 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ce6dd93-084c-4e15-8b7c-e0829a6df14e" volumeName="kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081926 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="12b89e05-a503-47aa-90b2-4d741e015b19" volumeName="kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-profile-collector-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081941 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ccd8e-d964-4c03-8ffc-51b464030c25" volumeName="kubernetes.io/configmap/6a9ccd8e-d964-4c03-8ffc-51b464030c25-trusted-ca" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081957 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c396c41-c617-4631-9700-a7052af5a276" volumeName="kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.081970 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e0c87ae-6387-4c00-b03d-582566907fb6" volumeName="kubernetes.io/empty-dir/8e0c87ae-6387-4c00-b03d-582566907fb6-snapshots" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 
02:20:57.081990 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e70a9f5-1154-40e9-a487-21e36e7f420a" volumeName="kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082004 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91168f3d-70eb-4351-bb83-5411a96ad29d" volumeName="kubernetes.io/secret/91168f3d-70eb-4351-bb83-5411a96ad29d-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082019 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df42c69b-1a0e-41f5-9006-17540369b9ad" volumeName="kubernetes.io/secret/df42c69b-1a0e-41f5-9006-17540369b9ad-proxy-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082036 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8d6a6c0-b944-4206-9178-9a9930b303b9" volumeName="kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082081 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="523033b8-4101-4a55-8320-55bef04ddaaf" volumeName="kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-env-overrides" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082094 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91d16f7b-390a-4d9d-99d6-cc8e210801d1" volumeName="kubernetes.io/configmap/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-trusted-ca" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 
02:20:57.082113 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6153510-452b-4726-8b63-8cc894daa168" volumeName="kubernetes.io/secret/c6153510-452b-4726-8b63-8cc894daa168-signing-key" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082130 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c84dc269-43ae-4083-9998-a0b3c90bb681" volumeName="kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-kube-api-access-9sp95" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082144 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e56a17d6-d740-4349-833e-b5279f7db2d4" volumeName="kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-utilities" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082157 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5463fbf-ac21-4058-9a3b-30d0e5ea31b7" volumeName="kubernetes.io/configmap/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082179 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24765ff1-5e7d-4100-ad81-8f73555fc0a2" volumeName="kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082193 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="523033b8-4101-4a55-8320-55bef04ddaaf" volumeName="kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-ovnkube-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082210 
31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b36d8451-0fda-4d9d-a850-d05c8f847016" volumeName="kubernetes.io/configmap/b36d8451-0fda-4d9d-a850-d05c8f847016-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082228 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d" volumeName="kubernetes.io/projected/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-kube-api-access-w6wvl" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082243 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/projected/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-kube-api-access-pwjpw" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082256 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c" volumeName="kubernetes.io/configmap/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-config-volume" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082272 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1" volumeName="kubernetes.io/projected/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-kube-api-access-rv6zq" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082286 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c396c41-c617-4631-9700-a7052af5a276" volumeName="kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: 
I0224 02:20:57.082300 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c396c41-c617-4631-9700-a7052af5a276" volumeName="kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082314 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f807f33c-8132-48a8-ab12-4b54c1cd2b10" volumeName="kubernetes.io/projected/f807f33c-8132-48a8-ab12-4b54c1cd2b10-kube-api-access-g8742" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082327 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02f1d753-983a-4c4a-b1a0-560de173859a" volumeName="kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-profile-collector-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082343 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-env-overrides" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082362 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c396c41-c617-4631-9700-a7052af5a276" volumeName="kubernetes.io/projected/8c396c41-c617-4631-9700-a7052af5a276-kube-api-access-n4grf" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082375 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2b8111-41c6-4333-b473-4c08fb836f70" volumeName="kubernetes.io/projected/df2b8111-41c6-4333-b473-4c08fb836f70-kube-api-access-cd796" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 
kubenswrapper[31411]: I0224 02:20:57.082390 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24765ff1-5e7d-4100-ad81-8f73555fc0a2" volumeName="kubernetes.io/projected/24765ff1-5e7d-4100-ad81-8f73555fc0a2-kube-api-access-98725" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082403 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25190a18-bdac-479b-b526-840d28636be3" volumeName="kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082417 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2cb764f6-40f8-4e87-8be0-b9d7b0364201" volumeName="kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082433 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3332acec-1553-4594-a903-a322399f6d9d" volumeName="kubernetes.io/projected/3332acec-1553-4594-a903-a322399f6d9d-kube-api-access-x6qs2" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082449 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e70a9f5-1154-40e9-a487-21e36e7f420a" volumeName="kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-images" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082462 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c92835f0-7f32-4584-8304-843d7979392a" volumeName="kubernetes.io/empty-dir/c92835f0-7f32-4584-8304-843d7979392a-available-featuregates" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 
kubenswrapper[31411]: I0224 02:20:57.082475 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2cb764f6-40f8-4e87-8be0-b9d7b0364201" volumeName="kubernetes.io/projected/2cb764f6-40f8-4e87-8be0-b9d7b0364201-kube-api-access-sp8hv" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082489 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" volumeName="kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082506 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ebd1a97-ff7b-4a10-a1b5-956e427478a8" volumeName="kubernetes.io/projected/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-kube-api-access-gckc2" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082521 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b5620d6-a5fe-45d7-b39e-8bed7f602a17" volumeName="kubernetes.io/secret/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082533 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cad383a-cb69-41a8-aec8-23ee1c930430" volumeName="kubernetes.io/projected/9cad383a-cb69-41a8-aec8-23ee1c930430-kube-api-access-svc78" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082548 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f85222bf-f51a-4232-8db1-1e6ee593617b" volumeName="kubernetes.io/secret/f85222bf-f51a-4232-8db1-1e6ee593617b-serving-cert" seLinuxMountContext="" Feb 24 
02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082609 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02f1d753-983a-4c4a-b1a0-560de173859a" volumeName="kubernetes.io/projected/02f1d753-983a-4c4a-b1a0-560de173859a-kube-api-access-mb52w" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082624 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="12b89e05-a503-47aa-90b2-4d741e015b19" volumeName="kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082641 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a2d8ef6-14ac-490d-a931-7082344d3f46" volumeName="kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-ca-certs" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082654 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6153510-452b-4726-8b63-8cc894daa168" volumeName="kubernetes.io/projected/c6153510-452b-4726-8b63-8cc894daa168-kube-api-access-lxz8j" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082667 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c84dc269-43ae-4083-9998-a0b3c90bb681" volumeName="kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082680 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cabdddba-5507-4e47-98ef-a00c6d0f305d" volumeName="kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-trusted-ca-bundle" seLinuxMountContext="" 
Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082694 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f5b3b93-a59d-495c-a311-8913fa6000fc" volumeName="kubernetes.io/secret/4f5b3b93-a59d-495c-a311-8913fa6000fc-catalogserver-certs" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082707 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" volumeName="kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-stats-auth" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082720 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="732a3831-20e0-47dc-a29a-8bb4659541b7" volumeName="kubernetes.io/projected/732a3831-20e0-47dc-a29a-8bb4659541b7-kube-api-access" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082738 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3a675b9-feaa-4456-b7b4-0cd3afc42a42" volumeName="kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082753 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e56a17d6-d740-4349-833e-b5279f7db2d4" volumeName="kubernetes.io/projected/e56a17d6-d740-4349-833e-b5279f7db2d4-kube-api-access-gg7sb" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082766 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-service-ca" 
seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082778 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22a83952-32ec-48f7-85cd-209b62362ae2" volumeName="kubernetes.io/secret/22a83952-32ec-48f7-85cd-209b62362ae2-tls-certificates" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082791 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="303d5058-84df-40d1-a941-896b093ae470" volumeName="kubernetes.io/empty-dir/303d5058-84df-40d1-a941-896b093ae470-operand-assets" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082805 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="608a8a56-daee-4fa1-8300-42155217c68b" volumeName="kubernetes.io/configmap/608a8a56-daee-4fa1-8300-42155217c68b-metrics-client-ca" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082818 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c" volumeName="kubernetes.io/secret/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082831 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4267e3a-aaaf-4b2f-a37c-0f097a35783f" volumeName="kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-utilities" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082872 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4267e3a-aaaf-4b2f-a37c-0f097a35783f" volumeName="kubernetes.io/projected/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-kube-api-access-5pc72" 
seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082887 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adc1097b-c1ab-4f09-965d-1c819671475b" volumeName="kubernetes.io/projected/adc1097b-c1ab-4f09-965d-1c819671475b-kube-api-access-nqtld" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082899 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e70a9f5-1154-40e9-a487-21e36e7f420a" volumeName="kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-auth-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082913 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24765ff1-5e7d-4100-ad81-8f73555fc0a2" volumeName="kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082926 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" volumeName="kubernetes.io/projected/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-kube-api-access-dqqkv" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082938 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b5620d6-a5fe-45d7-b39e-8bed7f602a17" volumeName="kubernetes.io/projected/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-kube-api-access-jtf52" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082950 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b176946a-c056-441c-9145-b88ca4d75758" 
volumeName="kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-encryption-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082965 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082977 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55a2662a-d672-4a46-9b81-bfcaf334eedb" volumeName="kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.082990 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2e9cdff-8c15-43df-b8df-7fe3a73fda86" volumeName="kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083003 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcbda577-b943-4b5c-b041-948aece8e40f" volumeName="kubernetes.io/configmap/fcbda577-b943-4b5c-b041-948aece8e40f-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083020 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ce6dd93-084c-4e15-8b7c-e0829a6df14e" volumeName="kubernetes.io/projected/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-kube-api-access-q8msx" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083037 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" 
volumeName="kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-metrics-certs" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083054 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ebd1a97-ff7b-4a10-a1b5-956e427478a8" volumeName="kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083068 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1" volumeName="kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-certs" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083087 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5463fbf-ac21-4058-9a3b-30d0e5ea31b7" volumeName="kubernetes.io/projected/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-kube-api-access" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083101 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5da829af-05fb-4f6e-9bec-c4dcc9cbec4b" volumeName="kubernetes.io/projected/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-kube-api-access-bbnd2" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083115 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" volumeName="kubernetes.io/configmap/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-service-ca-bundle" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083130 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" 
volumeName="kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083144 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c92835f0-7f32-4584-8304-843d7979392a" volumeName="kubernetes.io/projected/c92835f0-7f32-4584-8304-843d7979392a-kube-api-access-6nwzm" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083159 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" volumeName="kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-client" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083207 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb042de-c873-408c-a4c4-ef9f7e546a08" volumeName="kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-catalog-content" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083219 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5305004-5311-4bc4-ad7c-6670f97c89cb" volumeName="kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-metrics-client-ca" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083308 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f5b3b93-a59d-495c-a311-8913fa6000fc" volumeName="kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-kube-api-access-2tszx" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083328 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74a7801b-b7a4-4292-91b3-6285c239aeb7" 
volumeName="kubernetes.io/projected/74a7801b-b7a4-4292-91b3-6285c239aeb7-kube-api-access-pdmhx" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083343 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c396c41-c617-4631-9700-a7052af5a276" volumeName="kubernetes.io/empty-dir/8c396c41-c617-4631-9700-a7052af5a276-audit-log" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083357 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a02536a3-7d3e-4e74-9625-aefed518ec35" volumeName="kubernetes.io/secret/a02536a3-7d3e-4e74-9625-aefed518ec35-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083374 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c92835f0-7f32-4584-8304-843d7979392a" volumeName="kubernetes.io/secret/c92835f0-7f32-4584-8304-843d7979392a-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083392 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24765ff1-5e7d-4100-ad81-8f73555fc0a2" volumeName="kubernetes.io/empty-dir/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-textfile" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083408 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25190a18-bdac-479b-b526-840d28636be3" volumeName="kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-etcd-serving-ca" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083423 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55a2662a-d672-4a46-9b81-bfcaf334eedb" 
volumeName="kubernetes.io/projected/55a2662a-d672-4a46-9b81-bfcaf334eedb-kube-api-access-gzghr" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083439 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b085f760-0e24-41a8-af09-538396aad935" volumeName="kubernetes.io/projected/b085f760-0e24-41a8-af09-538396aad935-kube-api-access-q86gx" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083456 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f85222bf-f51a-4232-8db1-1e6ee593617b" volumeName="kubernetes.io/configmap/f85222bf-f51a-4232-8db1-1e6ee593617b-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083476 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25190a18-bdac-479b-b526-840d28636be3" volumeName="kubernetes.io/projected/25190a18-bdac-479b-b526-840d28636be3-kube-api-access-bsb4q" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083491 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ccd8e-d964-4c03-8ffc-51b464030c25" volumeName="kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083505 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c" volumeName="kubernetes.io/projected/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-kube-api-access-n2b65" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083521 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9b5620d6-a5fe-45d7-b39e-8bed7f602a17" volumeName="kubernetes.io/configmap/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-config" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083535 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e68b3061-c9d2-469d-babf-7ccac0ad9b14" volumeName="kubernetes.io/projected/e68b3061-c9d2-469d-babf-7ccac0ad9b14-kube-api-access-6xz68" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083560 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5463fbf-ac21-4058-9a3b-30d0e5ea31b7" volumeName="kubernetes.io/secret/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083600 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="638b3f88-0386-4f30-8ca5-6255e8f936fc" volumeName="kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-tmp" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083617 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e2ba24-4871-4d1d-9935-156fdbeb2810" volumeName="kubernetes.io/projected/70e2ba24-4871-4d1d-9935-156fdbeb2810-kube-api-access-4nmd6" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083634 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ccd8e-d964-4c03-8ffc-51b464030c25" volumeName="kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083650 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="74a7801b-b7a4-4292-91b3-6285c239aeb7" volumeName="kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083669 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f5b3b93-a59d-495c-a311-8913fa6000fc" volumeName="kubernetes.io/empty-dir/4f5b3b93-a59d-495c-a311-8913fa6000fc-cache" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083688 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ccd8e-d964-4c03-8ffc-51b464030c25" volumeName="kubernetes.io/projected/6a9ccd8e-d964-4c03-8ffc-51b464030c25-kube-api-access-ssz8p" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083704 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e0c87ae-6387-4c00-b03d-582566907fb6" volumeName="kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-service-ca-bundle" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083719 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb042de-c873-408c-a4c4-ef9f7e546a08" volumeName="kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-utilities" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083734 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25190a18-bdac-479b-b526-840d28636be3" volumeName="kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-trusted-ca-bundle" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083747 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="4a2d8ef6-14ac-490d-a931-7082344d3f46" volumeName="kubernetes.io/empty-dir/4a2d8ef6-14ac-490d-a931-7082344d3f46-cache" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083764 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55a2662a-d672-4a46-9b81-bfcaf334eedb" volumeName="kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083776 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74a7801b-b7a4-4292-91b3-6285c239aeb7" volumeName="kubernetes.io/configmap/74a7801b-b7a4-4292-91b3-6285c239aeb7-cco-trusted-ca" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083794 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c" volumeName="kubernetes.io/secret/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-metrics-tls" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083808 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cad383a-cb69-41a8-aec8-23ee1c930430" volumeName="kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-apiservice-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083822 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adc1097b-c1ab-4f09-965d-1c819671475b" volumeName="kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083836 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" volumeName="kubernetes.io/projected/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-kube-api-access-rc8jx" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083849 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cad383a-cb69-41a8-aec8-23ee1c930430" volumeName="kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-webhook-cert" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083864 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4267e3a-aaaf-4b2f-a37c-0f097a35783f" volumeName="kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-catalog-content" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083879 31411 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e56a17d6-d740-4349-833e-b5279f7db2d4" volumeName="kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-catalog-content" seLinuxMountContext="" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083893 31411 reconstruct.go:97] "Volume reconstruction finished" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.083904 31411 reconciler.go:26] "Reconciler: start to sync state" Feb 24 02:20:57.087389 master-0 kubenswrapper[31411]: I0224 02:20:57.086108 31411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 24 02:20:57.099065 master-0 kubenswrapper[31411]: I0224 02:20:57.090197 31411 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 24 02:20:57.099065 master-0 kubenswrapper[31411]: I0224 02:20:57.090200 31411 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 24 02:20:57.099065 master-0 kubenswrapper[31411]: I0224 02:20:57.090326 31411 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 24 02:20:57.099065 master-0 kubenswrapper[31411]: I0224 02:20:57.090434 31411 kubelet.go:2335] "Starting kubelet main sync loop" Feb 24 02:20:57.099065 master-0 kubenswrapper[31411]: E0224 02:20:57.090541 31411 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 24 02:20:57.099065 master-0 kubenswrapper[31411]: I0224 02:20:57.093802 31411 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 02:20:57.111194 master-0 kubenswrapper[31411]: I0224 02:20:57.111073 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-sl5hz_b36d8451-0fda-4d9d-a850-d05c8f847016/openshift-apiserver-operator/2.log" Feb 24 02:20:57.111194 master-0 kubenswrapper[31411]: I0224 02:20:57.111124 31411 generic.go:334] "Generic (PLEG): container finished" podID="b36d8451-0fda-4d9d-a850-d05c8f847016" containerID="b55dffa76d343a2f86cafe3f911f9b262284cdca97531fab28e85dab3a7157d5" exitCode=1 Feb 24 02:20:57.120628 master-0 kubenswrapper[31411]: I0224 02:20:57.118901 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/4.log" Feb 24 02:20:57.123918 master-0 kubenswrapper[31411]: I0224 02:20:57.122805 31411 generic.go:334] "Generic (PLEG): container finished" podID="c3278a82-ee70-4d6c-9c96-f8cb1bcb9334" containerID="7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8" exitCode=1 Feb 24 02:20:57.128434 master-0 kubenswrapper[31411]: I0224 02:20:57.127106 31411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-2492q_f85222bf-f51a-4232-8db1-1e6ee593617b/kube-apiserver-operator/1.log" Feb 24 02:20:57.128434 master-0 kubenswrapper[31411]: I0224 02:20:57.127189 31411 generic.go:334] "Generic (PLEG): container finished" podID="f85222bf-f51a-4232-8db1-1e6ee593617b" containerID="35a7a7e510655bf960c0ebb22ba6e0c6db1f746f1a9047b38fb6bb5a0b24bc60" exitCode=255 Feb 24 02:20:57.130494 master-0 kubenswrapper[31411]: I0224 02:20:57.130422 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-hvr8b_4a2d8ef6-14ac-490d-a931-7082344d3f46/manager/1.log" Feb 24 02:20:57.131103 master-0 kubenswrapper[31411]: I0224 02:20:57.131038 31411 generic.go:334] "Generic (PLEG): container finished" podID="4a2d8ef6-14ac-490d-a931-7082344d3f46" containerID="8f261203a7d383fb41e07035f6494d12c5ece1ea073bc54bdb848ad1f13db388" exitCode=1 Feb 24 02:20:57.139956 master-0 kubenswrapper[31411]: I0224 02:20:57.139905 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-c7fgn_7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c/openshift-controller-manager-operator/0.log" Feb 24 02:20:57.139956 master-0 kubenswrapper[31411]: I0224 02:20:57.139955 31411 generic.go:334] "Generic (PLEG): container finished" podID="7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c" containerID="e9d2b8b0026aada75f8d27003b5a7df3b3f0253b60da8ae01339ca6b74582705" exitCode=1 Feb 24 02:20:57.145930 master-0 kubenswrapper[31411]: I0224 02:20:57.145853 31411 generic.go:334] "Generic (PLEG): container finished" podID="7b098bd4-5751-4b01-8409-0688fd29233e" containerID="cd4d613347fcadf7d18c80092bf87aa7750bed8421d9d1f3c7ea9c90740b5523" exitCode=0 Feb 24 02:20:57.148409 master-0 kubenswrapper[31411]: I0224 02:20:57.148360 31411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-p5b6q_adc1097b-c1ab-4f09-965d-1c819671475b/approver/1.log" Feb 24 02:20:57.148853 master-0 kubenswrapper[31411]: I0224 02:20:57.148806 31411 generic.go:334] "Generic (PLEG): container finished" podID="adc1097b-c1ab-4f09-965d-1c819671475b" containerID="a75bd245067c2bd96dc6595a801a2c01cb01bd1d3a373e46bfec3a120455dff3" exitCode=1 Feb 24 02:20:57.154199 master-0 kubenswrapper[31411]: I0224 02:20:57.154142 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jhklz_4f5b3b93-a59d-495c-a311-8913fa6000fc/manager/1.log" Feb 24 02:20:57.154800 master-0 kubenswrapper[31411]: I0224 02:20:57.154735 31411 generic.go:334] "Generic (PLEG): container finished" podID="4f5b3b93-a59d-495c-a311-8913fa6000fc" containerID="8ea7a40a6269170c26a7054768f3ac9bed9d2f95f70afef7c8db1dfa7590c2a4" exitCode=1 Feb 24 02:20:57.163233 master-0 kubenswrapper[31411]: I0224 02:20:57.163164 31411 generic.go:334] "Generic (PLEG): container finished" podID="fbe9964a-9e82-48e9-82b0-7c07e4cec3a2" containerID="fcdcbf5a149a0b578453f994965bc5ee1ca152377a3fb51c3ba2512d342fe454" exitCode=0 Feb 24 02:20:57.166036 master-0 kubenswrapper[31411]: I0224 02:20:57.165981 31411 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="5fbafce85063f872b1786e48e809b15f5aa08369e9d34c7d53d1c636ed17075e" exitCode=1 Feb 24 02:20:57.167991 master-0 kubenswrapper[31411]: I0224 02:20:57.167932 31411 generic.go:334] "Generic (PLEG): container finished" podID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerID="35c312973828464e3d9786034ffddad219bbd2d62792822db99238b48a9c981d" exitCode=0 Feb 24 02:20:57.179289 master-0 kubenswrapper[31411]: I0224 02:20:57.179198 31411 generic.go:334] "Generic (PLEG): container finished" podID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerID="2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a" 
exitCode=0 Feb 24 02:20:57.189874 master-0 kubenswrapper[31411]: I0224 02:20:57.189807 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 24 02:20:57.190259 master-0 kubenswrapper[31411]: I0224 02:20:57.190212 31411 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="4ea164dd4d44c905424ce0b0b3ea58702494938b88cbbbe52d4ce16914c7762b" exitCode=1 Feb 24 02:20:57.190259 master-0 kubenswrapper[31411]: I0224 02:20:57.190241 31411 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="17bbd7fbbeaf0dec034d902b8e7575b6559c59f312f1a534ffc4119208ba1272" exitCode=0 Feb 24 02:20:57.191060 master-0 kubenswrapper[31411]: E0224 02:20:57.191013 31411 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 24 02:20:57.194838 master-0 kubenswrapper[31411]: I0224 02:20:57.194781 31411 generic.go:334] "Generic (PLEG): container finished" podID="b176946a-c056-441c-9145-b88ca4d75758" containerID="428d9eac1aa20c4549b1b4238b89b04f2faa950b7b0a74457007efebb7f09258" exitCode=0 Feb 24 02:20:57.196997 master-0 kubenswrapper[31411]: I0224 02:20:57.196935 31411 generic.go:334] "Generic (PLEG): container finished" podID="50c78047-1c4d-4535-ba2c-31f080d6a57d" containerID="7bb232625f3494579f18ed676cbbdfe8d63a7f633ead8439889c5a5bfa8b5a12" exitCode=0 Feb 24 02:20:57.203144 master-0 kubenswrapper[31411]: I0224 02:20:57.203087 31411 generic.go:334] "Generic (PLEG): container finished" podID="a4267e3a-aaaf-4b2f-a37c-0f097a35783f" containerID="1c5e5c71f0526c11a274e2510dcba4e8e573cc7f5dabb80f3b317e540bfa20fd" exitCode=0 Feb 24 02:20:57.203144 master-0 kubenswrapper[31411]: I0224 02:20:57.203113 31411 generic.go:334] "Generic (PLEG): container finished" podID="a4267e3a-aaaf-4b2f-a37c-0f097a35783f" 
containerID="17cb16eb2bff3a3eb8a7a600aa48da74788c2b2cfe679ee292ca901ee7cdc53d" exitCode=0 Feb 24 02:20:57.209217 master-0 kubenswrapper[31411]: I0224 02:20:57.209170 31411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4" exitCode=0 Feb 24 02:20:57.220192 master-0 kubenswrapper[31411]: I0224 02:20:57.220138 31411 generic.go:334] "Generic (PLEG): container finished" podID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerID="5ed088abb8fdf119602dca1779c3b84da28af95aaab8dcf8c7df738c7d83aa56" exitCode=0 Feb 24 02:20:57.235778 master-0 kubenswrapper[31411]: I0224 02:20:57.235726 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/2.log" Feb 24 02:20:57.235778 master-0 kubenswrapper[31411]: I0224 02:20:57.235778 31411 generic.go:334] "Generic (PLEG): container finished" podID="cabdddba-5507-4e47-98ef-a00c6d0f305d" containerID="a6b3894561c0264e4475f846f6d8a4a1bfc54fcd4a779abb17626105631909c8" exitCode=255 Feb 24 02:20:57.247933 master-0 kubenswrapper[31411]: I0224 02:20:57.247861 31411 generic.go:334] "Generic (PLEG): container finished" podID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerID="16b88bdb19342563d81116f8e13c7e868beadde1813cafcd204be1678600a199" exitCode=0 Feb 24 02:20:57.262069 master-0 kubenswrapper[31411]: I0224 02:20:57.261995 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/4.log" Feb 24 02:20:57.262069 master-0 kubenswrapper[31411]: I0224 02:20:57.262036 31411 generic.go:334] "Generic (PLEG): container finished" podID="f6e7b773-7ecd-4a5c-8bef-d672f371e7e5" containerID="de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d" 
exitCode=1 Feb 24 02:20:57.266797 master-0 kubenswrapper[31411]: I0224 02:20:57.266729 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-ckntz_a4cea44a-1c6e-465f-97df-2c951056cb85/control-plane-machine-set-operator/0.log" Feb 24 02:20:57.266965 master-0 kubenswrapper[31411]: I0224 02:20:57.266819 31411 generic.go:334] "Generic (PLEG): container finished" podID="a4cea44a-1c6e-465f-97df-2c951056cb85" containerID="333b5a9659128a52ffaed5da3c25d8feb0986d4e855c20f96e40ad31f9cb9171" exitCode=1 Feb 24 02:20:57.274077 master-0 kubenswrapper[31411]: I0224 02:20:57.273986 31411 generic.go:334] "Generic (PLEG): container finished" podID="64b7ea36-8849-4955-80b5-c7e7c12fcc29" containerID="266ce948594252c2399468918fec845a74da7e6fcd999550c798b018f78a387f" exitCode=0 Feb 24 02:20:57.277719 master-0 kubenswrapper[31411]: I0224 02:20:57.276994 31411 generic.go:334] "Generic (PLEG): container finished" podID="8e0c87ae-6387-4c00-b03d-582566907fb6" containerID="2b6a32bb9499d220c3167aa1a2fb2a91c9d1624533d8361accf480baf5e26efd" exitCode=0 Feb 24 02:20:57.280486 master-0 kubenswrapper[31411]: I0224 02:20:57.280433 31411 generic.go:334] "Generic (PLEG): container finished" podID="24983c94-f158-4a07-854b-2e5455374f19" containerID="e51ee7952dfd445e552e98f87e6cac337f269d310845fcc3274f65c031cd5dd3" exitCode=0 Feb 24 02:20:57.287262 master-0 kubenswrapper[31411]: I0224 02:20:57.287203 31411 generic.go:334] "Generic (PLEG): container finished" podID="523033b8-4101-4a55-8320-55bef04ddaaf" containerID="6dce9a18bd8067d5b09584fe75915e8862f74f2dfbc221872c96fc50a87d78c5" exitCode=0 Feb 24 02:20:57.293125 master-0 kubenswrapper[31411]: I0224 02:20:57.293071 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/config-sync-controllers/0.log" Feb 24 02:20:57.294288 master-0 
kubenswrapper[31411]: I0224 02:20:57.294235 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/cluster-cloud-controller-manager/0.log" Feb 24 02:20:57.294394 master-0 kubenswrapper[31411]: I0224 02:20:57.294303 31411 generic.go:334] "Generic (PLEG): container finished" podID="8e70a9f5-1154-40e9-a487-21e36e7f420a" containerID="092cbfe8313c87ea7a79610f389e04195756c11c4aca575ebaf70dbe1a3f496d" exitCode=1 Feb 24 02:20:57.294394 master-0 kubenswrapper[31411]: I0224 02:20:57.294332 31411 generic.go:334] "Generic (PLEG): container finished" podID="8e70a9f5-1154-40e9-a487-21e36e7f420a" containerID="d769b62bae7da248060974b37bc61a65e0831df5e231d5c8e62a89bc58f3df85" exitCode=1 Feb 24 02:20:57.301324 master-0 kubenswrapper[31411]: I0224 02:20:57.301280 31411 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="bb9a80ed6d7d7eb83242571f651240d13b6fe2b3ccfaff6770e496961a1600a5" exitCode=0 Feb 24 02:20:57.301324 master-0 kubenswrapper[31411]: I0224 02:20:57.301321 31411 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="e47b414c847a54539f28e830435dfe61ba2d4309c2e9e84ac24e938ca23917ff" exitCode=0 Feb 24 02:20:57.301521 master-0 kubenswrapper[31411]: I0224 02:20:57.301339 31411 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="e6d11eb8af0756a7414e361a0a41884731f78257822ffdb122c02d11a1914c35" exitCode=0 Feb 24 02:20:57.305309 master-0 kubenswrapper[31411]: I0224 02:20:57.305263 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_bd02da41-8a48-4436-ae58-6363e7554898/installer/0.log" Feb 24 02:20:57.305441 master-0 kubenswrapper[31411]: I0224 02:20:57.305331 31411 generic.go:334] "Generic (PLEG): container 
finished" podID="bd02da41-8a48-4436-ae58-6363e7554898" containerID="beff9cdd09dcda0a6932e333a63d749970c5574701c511858c571df2f87fa178" exitCode=1 Feb 24 02:20:57.308042 master-0 kubenswrapper[31411]: I0224 02:20:57.307973 31411 generic.go:334] "Generic (PLEG): container finished" podID="5508683b-09ae-47a1-89fd-b0891a881e09" containerID="6bd403605e79109075e7b61bac31b57ae266809e2fcec35f73761229b419851f" exitCode=0 Feb 24 02:20:57.314794 master-0 kubenswrapper[31411]: I0224 02:20:57.314740 31411 generic.go:334] "Generic (PLEG): container finished" podID="c84dc269-43ae-4083-9998-a0b3c90bb681" containerID="d334785d9219b7d8b6844162f83a93171c2b15443bfd41ab182a8af1bf2c16e9" exitCode=0 Feb 24 02:20:57.321665 master-0 kubenswrapper[31411]: I0224 02:20:57.321615 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-xdws2_fcbda577-b943-4b5c-b041-948aece8e40f/kube-storage-version-migrator-operator/1.log" Feb 24 02:20:57.321801 master-0 kubenswrapper[31411]: I0224 02:20:57.321766 31411 generic.go:334] "Generic (PLEG): container finished" podID="fcbda577-b943-4b5c-b041-948aece8e40f" containerID="b052a287fe47bb0fa4d703a983f03a367ec9eff6f9c816432ac44ee4c3a812f3" exitCode=255 Feb 24 02:20:57.327810 master-0 kubenswrapper[31411]: I0224 02:20:57.327767 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_c12652f5-003f-4b77-b2bb-b666c9d7bb53/installer/0.log" Feb 24 02:20:57.327945 master-0 kubenswrapper[31411]: I0224 02:20:57.327815 31411 generic.go:334] "Generic (PLEG): container finished" podID="c12652f5-003f-4b77-b2bb-b666c9d7bb53" containerID="50801a56a4404416a44874540419cd05a4a4bedf1fb5022f9e0b4725f3c11f4d" exitCode=1 Feb 24 02:20:57.330779 master-0 kubenswrapper[31411]: I0224 02:20:57.330726 31411 generic.go:334] "Generic (PLEG): container finished" podID="c6153510-452b-4726-8b63-8cc894daa168" 
containerID="cc397ba4850c1884796fd99f77165bb7223cb379b9ddec9b0da649d7c4abf92f" exitCode=0 Feb 24 02:20:57.332786 master-0 kubenswrapper[31411]: I0224 02:20:57.332730 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_683deae1-94b1-4c17-a73f-ad628a09134b/installer/0.log" Feb 24 02:20:57.332882 master-0 kubenswrapper[31411]: I0224 02:20:57.332802 31411 generic.go:334] "Generic (PLEG): container finished" podID="683deae1-94b1-4c17-a73f-ad628a09134b" containerID="94401de1842b75a4dd153e2d7cb3bd01f3f26706beddf59514cdea6c0eb4a139" exitCode=1 Feb 24 02:20:57.335036 master-0 kubenswrapper[31411]: I0224 02:20:57.334984 31411 generic.go:334] "Generic (PLEG): container finished" podID="9b5620d6-a5fe-45d7-b39e-8bed7f602a17" containerID="18e36dffd25cf50db60c55874ae5e83aa35faa4fa1dff2c477ec4899a01aa1f0" exitCode=0 Feb 24 02:20:57.340856 master-0 kubenswrapper[31411]: I0224 02:20:57.340809 31411 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="0261b05dc86f44c57d1260d8e9e574b7afb0942396c397b4be98f1486a4e967b" exitCode=0 Feb 24 02:20:57.340856 master-0 kubenswrapper[31411]: I0224 02:20:57.340837 31411 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="46f1df0f3044924b6c94bc53975525ce01b17baddc32b6007d1fff90c64f595f" exitCode=0 Feb 24 02:20:57.340856 master-0 kubenswrapper[31411]: I0224 02:20:57.340848 31411 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="cfdb24d0d0b1a9e1ffe1c98259396806799adff6a318a37a19e4e31ee02f6987" exitCode=0 Feb 24 02:20:57.340856 master-0 kubenswrapper[31411]: I0224 02:20:57.340856 31411 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="bdb96a50270730f3bce2e557a04b02a2063f4f2e15fbd55d5081bf5036b5f652" exitCode=0 Feb 24 02:20:57.340856 master-0 
kubenswrapper[31411]: I0224 02:20:57.340867 31411 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="73bcd3ba04771dbfaf54cb795e59bd88d55d88d355f426be066ffb50beee1f86" exitCode=0 Feb 24 02:20:57.341195 master-0 kubenswrapper[31411]: I0224 02:20:57.340878 31411 generic.go:334] "Generic (PLEG): container finished" podID="57811d07-ae8a-44b7-8efb-dafc5afad31e" containerID="78fb207cbc767c0fee7b7d210f99c9aaf3165a7c791dd4e586c95fb618507ed8" exitCode=0 Feb 24 02:20:57.343907 master-0 kubenswrapper[31411]: I0224 02:20:57.343853 31411 generic.go:334] "Generic (PLEG): container finished" podID="24765ff1-5e7d-4100-ad81-8f73555fc0a2" containerID="1f52501726ed970c81fbc87519c42dbbcb0a0375319ca30b25aeac0dc7303da1" exitCode=0 Feb 24 02:20:57.348634 master-0 kubenswrapper[31411]: I0224 02:20:57.348570 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-drrqm_3332acec-1553-4594-a903-a322399f6d9d/network-operator/1.log" Feb 24 02:20:57.348730 master-0 kubenswrapper[31411]: I0224 02:20:57.348654 31411 generic.go:334] "Generic (PLEG): container finished" podID="3332acec-1553-4594-a903-a322399f6d9d" containerID="ab02a48b627b8d33941cca436e701fe8cca5e7a818343839afa499d5ab3abe6a" exitCode=255 Feb 24 02:20:57.351777 master-0 kubenswrapper[31411]: I0224 02:20:57.351742 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-8x6sd_6a9ccd8e-d964-4c03-8ffc-51b464030c25/cluster-node-tuning-operator/0.log" Feb 24 02:20:57.351877 master-0 kubenswrapper[31411]: I0224 02:20:57.351799 31411 generic.go:334] "Generic (PLEG): container finished" podID="6a9ccd8e-d964-4c03-8ffc-51b464030c25" containerID="3b252dadf775881151b12a03ae689a3184f813df8c5304f84973c53cfd29954e" exitCode=1 Feb 24 02:20:57.359458 master-0 kubenswrapper[31411]: I0224 02:20:57.359254 31411 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/2.log" Feb 24 02:20:57.359900 master-0 kubenswrapper[31411]: I0224 02:20:57.359831 31411 generic.go:334] "Generic (PLEG): container finished" podID="7b4e3ba0-5194-4e20-8f12-dea4b67504fe" containerID="08908d93b30c563001e2ff25650203b4026284e8bf58ed4cc26c75825e885fed" exitCode=1 Feb 24 02:20:57.363602 master-0 kubenswrapper[31411]: I0224 02:20:57.363501 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-sjqsx_8ebd1a97-ff7b-4a10-a1b5-956e427478a8/machine-approver-controller/0.log" Feb 24 02:20:57.363999 master-0 kubenswrapper[31411]: I0224 02:20:57.363966 31411 generic.go:334] "Generic (PLEG): container finished" podID="8ebd1a97-ff7b-4a10-a1b5-956e427478a8" containerID="1ad25f42402e4374c8b94191386be9d7bc2003ced71ea11e24c2117025405399" exitCode=255 Feb 24 02:20:57.367551 master-0 kubenswrapper[31411]: I0224 02:20:57.367499 31411 generic.go:334] "Generic (PLEG): container finished" podID="c92835f0-7f32-4584-8304-843d7979392a" containerID="31574f7edd04f096f094395019ad492b65bb9b76a514603c20ad3eb658100f5f" exitCode=0 Feb 24 02:20:57.367551 master-0 kubenswrapper[31411]: I0224 02:20:57.367546 31411 generic.go:334] "Generic (PLEG): container finished" podID="c92835f0-7f32-4584-8304-843d7979392a" containerID="ec5c26bb0883484781a82be8d7bf1a6eb78e1cb6c0192ee0fe34ebba8f9531c4" exitCode=0 Feb 24 02:20:57.372756 master-0 kubenswrapper[31411]: I0224 02:20:57.372731 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-8tttg_f5463fbf-ac21-4058-9a3b-30d0e5ea31b7/kube-scheduler-operator-container/1.log" Feb 24 02:20:57.372907 master-0 kubenswrapper[31411]: I0224 02:20:57.372885 31411 generic.go:334] "Generic (PLEG): container finished" 
podID="f5463fbf-ac21-4058-9a3b-30d0e5ea31b7" containerID="5d2191d76e599e2fb849008551eb68ecc941d4be08a06831c47e2e57561783d8" exitCode=255 Feb 24 02:20:57.377713 master-0 kubenswrapper[31411]: I0224 02:20:57.377649 31411 generic.go:334] "Generic (PLEG): container finished" podID="303d5058-84df-40d1-a941-896b093ae470" containerID="af016520fee3befe328cfcacf1e61661d632e7bab3a83e84890042b8512881e1" exitCode=0 Feb 24 02:20:57.377793 master-0 kubenswrapper[31411]: I0224 02:20:57.377713 31411 generic.go:334] "Generic (PLEG): container finished" podID="303d5058-84df-40d1-a941-896b093ae470" containerID="3765f830d9a5fe9077b8e56d63e0f2189d75d32a461453b1f0db5a0b05c13f47" exitCode=0 Feb 24 02:20:57.377793 master-0 kubenswrapper[31411]: I0224 02:20:57.377731 31411 generic.go:334] "Generic (PLEG): container finished" podID="303d5058-84df-40d1-a941-896b093ae470" containerID="4d87ace597126f6a6c5b7ecfae7ff8d57f99cad256a801b2bb6027c85887bf7c" exitCode=0 Feb 24 02:20:57.380875 master-0 kubenswrapper[31411]: I0224 02:20:57.380851 31411 generic.go:334] "Generic (PLEG): container finished" podID="b085f760-0e24-41a8-af09-538396aad935" containerID="532c8330be4e1bb00f3b8c98db49eb86ee33fea1c47fd0eb58ed9999c987cc56" exitCode=0 Feb 24 02:20:57.381015 master-0 kubenswrapper[31411]: I0224 02:20:57.380995 31411 generic.go:334] "Generic (PLEG): container finished" podID="b085f760-0e24-41a8-af09-538396aad935" containerID="aab5f16cb62468cbd33a1b962837194ed256ccf00334d577f86ad2e704134976" exitCode=0 Feb 24 02:20:57.391310 master-0 kubenswrapper[31411]: E0224 02:20:57.391236 31411 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 24 02:20:57.403775 master-0 kubenswrapper[31411]: I0224 02:20:57.403718 31411 generic.go:334] "Generic (PLEG): container finished" podID="0cb042de-c873-408c-a4c4-ef9f7e546a08" containerID="d8bc7a70a71673332c516e84a549e1618f6a4d5aacc90397bac38d952ac62d70" exitCode=0 Feb 24 02:20:57.403775 master-0 
kubenswrapper[31411]: I0224 02:20:57.403771 31411 generic.go:334] "Generic (PLEG): container finished" podID="0cb042de-c873-408c-a4c4-ef9f7e546a08" containerID="700bebabab614b094ae34ffb33548f3295723695a9fad8972c8014d17036eac5" exitCode=0 Feb 24 02:20:57.407551 master-0 kubenswrapper[31411]: I0224 02:20:57.407523 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-tl97n_a02536a3-7d3e-4e74-9625-aefed518ec35/kube-controller-manager-operator/1.log" Feb 24 02:20:57.407761 master-0 kubenswrapper[31411]: I0224 02:20:57.407731 31411 generic.go:334] "Generic (PLEG): container finished" podID="a02536a3-7d3e-4e74-9625-aefed518ec35" containerID="fe82991d620c953a66413bff375a2b214f4a5b8652aa8341d49741ccabb41961" exitCode=255 Feb 24 02:20:57.415511 master-0 kubenswrapper[31411]: I0224 02:20:57.415454 31411 generic.go:334] "Generic (PLEG): container finished" podID="fb39fcc8-beb4-410e-b2a4-0b3e150719cc" containerID="c3063a301534062c954aa79867d0cc96573d7146ccda3bfb83406935c96bf2b9" exitCode=0 Feb 24 02:20:57.424702 master-0 kubenswrapper[31411]: I0224 02:20:57.424654 31411 generic.go:334] "Generic (PLEG): container finished" podID="25190a18-bdac-479b-b526-840d28636be3" containerID="2bd08832a83f0b581af5dd0d4502909325c97e4a1b072cf713d68506345db86b" exitCode=0 Feb 24 02:20:57.427598 master-0 kubenswrapper[31411]: I0224 02:20:57.427541 31411 generic.go:334] "Generic (PLEG): container finished" podID="06fb1d82-f9e9-473b-80c5-767ec3948bd4" containerID="23e5ece2a1174ce846ce41906ef5a0fcc35a5f58a900b96b34aee280e09c4850" exitCode=0 Feb 24 02:20:57.429956 master-0 kubenswrapper[31411]: I0224 02:20:57.429927 31411 generic.go:334] "Generic (PLEG): container finished" podID="eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" containerID="7141c97ac4f89de3381db3a918874906220134e85f0b183e0b4eaacb0434c063" exitCode=0 Feb 24 02:20:57.444505 master-0 kubenswrapper[31411]: I0224 02:20:57.444340 31411 generic.go:334] 
"Generic (PLEG): container finished" podID="e56a17d6-d740-4349-833e-b5279f7db2d4" containerID="5c2bae7a5f82ac2dacb3d782b5c120ab2d48eaa30503f169922755bafd417358" exitCode=0 Feb 24 02:20:57.444505 master-0 kubenswrapper[31411]: I0224 02:20:57.444401 31411 generic.go:334] "Generic (PLEG): container finished" podID="e56a17d6-d740-4349-833e-b5279f7db2d4" containerID="1e7c0eb2fdff11adf850fda4f441e025c2724fb8123e10487c2648065fa6f259" exitCode=0 Feb 24 02:20:57.691110 master-0 kubenswrapper[31411]: I0224 02:20:57.691044 31411 manager.go:324] Recovery completed Feb 24 02:20:57.791437 master-0 kubenswrapper[31411]: E0224 02:20:57.791345 31411 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 24 02:20:57.816744 master-0 kubenswrapper[31411]: I0224 02:20:57.816661 31411 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 02:20:57.816744 master-0 kubenswrapper[31411]: I0224 02:20:57.816725 31411 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 24 02:20:57.816956 master-0 kubenswrapper[31411]: I0224 02:20:57.816790 31411 state_mem.go:36] "Initialized new in-memory state store" Feb 24 02:20:57.817304 master-0 kubenswrapper[31411]: I0224 02:20:57.817252 31411 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 24 02:20:57.817378 master-0 kubenswrapper[31411]: I0224 02:20:57.817297 31411 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 24 02:20:57.817378 master-0 kubenswrapper[31411]: I0224 02:20:57.817353 31411 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 24 02:20:57.817378 master-0 kubenswrapper[31411]: I0224 02:20:57.817373 31411 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 24 02:20:57.817562 master-0 kubenswrapper[31411]: I0224 02:20:57.817392 31411 policy_none.go:49] "None policy: Start" Feb 24 02:20:57.827761 master-0 kubenswrapper[31411]: I0224 02:20:57.827703 31411 memory_manager.go:170] "Starting 
memorymanager" policy="None" Feb 24 02:20:57.827902 master-0 kubenswrapper[31411]: I0224 02:20:57.827772 31411 state_mem.go:35] "Initializing new in-memory state store" Feb 24 02:20:57.828271 master-0 kubenswrapper[31411]: I0224 02:20:57.828223 31411 state_mem.go:75] "Updated machine memory state" Feb 24 02:20:57.828271 master-0 kubenswrapper[31411]: I0224 02:20:57.828257 31411 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 24 02:20:57.859106 master-0 kubenswrapper[31411]: I0224 02:20:57.859042 31411 manager.go:334] "Starting Device Plugin manager" Feb 24 02:20:57.859845 master-0 kubenswrapper[31411]: I0224 02:20:57.859196 31411 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 24 02:20:57.859845 master-0 kubenswrapper[31411]: I0224 02:20:57.859223 31411 server.go:79] "Starting device plugin registration server" Feb 24 02:20:57.860262 master-0 kubenswrapper[31411]: I0224 02:20:57.860212 31411 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 02:20:57.861483 master-0 kubenswrapper[31411]: I0224 02:20:57.860394 31411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 02:20:57.861682 master-0 kubenswrapper[31411]: I0224 02:20:57.861630 31411 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 02:20:57.862411 master-0 kubenswrapper[31411]: I0224 02:20:57.862366 31411 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 24 02:20:57.862411 master-0 kubenswrapper[31411]: I0224 02:20:57.862396 31411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 24 02:20:57.961685 master-0 kubenswrapper[31411]: I0224 02:20:57.961605 31411 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:20:57.967062 master-0 kubenswrapper[31411]: I0224 02:20:57.967006 
31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:20:57.967156 master-0 kubenswrapper[31411]: I0224 02:20:57.967076 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:20:57.967156 master-0 kubenswrapper[31411]: I0224 02:20:57.967100 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:20:57.967301 master-0 kubenswrapper[31411]: I0224 02:20:57.967252 31411 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:20:57.974335 master-0 kubenswrapper[31411]: E0224 02:20:57.974263 31411 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 24 02:20:58.005867 master-0 kubenswrapper[31411]: I0224 02:20:58.005784 31411 apiserver.go:52] "Watching apiserver" Feb 24 02:20:58.039925 master-0 kubenswrapper[31411]: I0224 02:20:58.039856 31411 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 02:20:58.175466 master-0 kubenswrapper[31411]: I0224 02:20:58.175337 31411 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:20:58.180861 master-0 kubenswrapper[31411]: I0224 02:20:58.180799 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:20:58.180968 master-0 kubenswrapper[31411]: I0224 02:20:58.180880 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:20:58.180968 master-0 kubenswrapper[31411]: I0224 02:20:58.180900 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:20:58.181139 
master-0 kubenswrapper[31411]: I0224 02:20:58.181078 31411 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:20:58.186295 master-0 kubenswrapper[31411]: E0224 02:20:58.186224 31411 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 24 02:20:58.587527 master-0 kubenswrapper[31411]: I0224 02:20:58.587416 31411 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:20:58.591732 master-0 kubenswrapper[31411]: I0224 02:20:58.591528 31411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Feb 24 02:20:58.592033 master-0 kubenswrapper[31411]: I0224 02:20:58.591959 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:20:58.592033 master-0 kubenswrapper[31411]: I0224 02:20:58.592036 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:20:58.592222 master-0 kubenswrapper[31411]: I0224 02:20:58.592058 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:20:58.592459 master-0 kubenswrapper[31411]: I0224 02:20:58.592406 31411 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:20:58.593254 master-0 kubenswrapper[31411]: I0224 02:20:58.593080 31411 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/installer-3-master-0","openshift-multus/multus-7fbjw","openshift-multus/multus-admission-controller-5f54bf67d4-ctssl","openshift-network-operator/iptables-alerter-rjbl5","kube-system/bootstrap-kube-scheduler-master-0","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn","openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4","openshift-ingress-operator/ingress-operator-6569778c84-6dlqb","openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk","openshift-dns/dns-default-5rf6m","openshift-etcd/installer-2-master-0","openshift-ingress-canary/ingress-canary-jjpsc","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz","openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt","openshift-cluster-node-tuning-operator/tuned-26b2v","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb","openshift-monitoring/node-exporter-2qn8m","openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd","openshift-service-ca/service-ca-576b4d78bd-nqcs2","openshift-marketplace/marketplace-operator-6f5488b997-4qf9p","openshift-network-diagnostics/network-check-target-54b95","openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz","openshift-cluster-version/cluster-version-operator-57476485-9cjj5","openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6","openshift-network-operator/network-operator-7d7db75979-drrqm","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k","openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-
8l58x","openshift-dns/node-resolver-4lwwp","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2","openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7","openshift-marketplace/redhat-marketplace-qqt7p","openshift-ovn-kubernetes/ovnkube-node-rg9r6","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k","openshift-kube-apiserver/installer-2-master-0","openshift-kube-controller-manager/installer-3-master-0","openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4","openshift-marketplace/certified-operators-brpmb","openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr","openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd","openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6","openshift-kube-scheduler/installer-3-retry-1-master-0","openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-multus/multus-additional-cni-plugins-jtdht","openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c","openshift-multus/network-metrics-daemon-tntcf","openshift-machine-config-operator/machine-config-daemon-hfpql","openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2","openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn","openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb","openshift-kube-apiserver/installer-1-master-0","openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn","openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm","openshift-marketplace/community-operators-kkwwl","openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b","openshift-operator-li
fecycle-manager/collect-profiles-29531640-kptmw","openshift-dns-operator/dns-operator-8c7d49845-hxcn2","openshift-etcd/etcd-master-0","openshift-etcd/installer-1-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n","openshift-machine-config-operator/machine-config-server-drf28","openshift-marketplace/redhat-operators-4znnj","openshift-monitoring/prometheus-operator-754bc4d665-66lml","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc","openshift-kube-controller-manager/installer-2-master-0","openshift-kube-controller-manager/installer-2-retry-1-master-0","openshift-monitoring/metrics-server-7b9cc5984b-smpdl","openshift-network-node-identity/network-node-identity-p5b6q","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-apiserver/apiserver-79dc9447fd-x64vl","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc","openshift-ingress/router-default-7b65dc9fcb-22sgl","openshift-insights/insights-operator-59b498fcfb-dbkwd","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c","openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk","openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb","openshift-monitoring/kube-state-metrics-59584d565f-f6f26","openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2","assisted-installer/assisted-installer-controller-f2lj9","openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59","openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"] Feb 24 02:20:58.593543 master-0 kubenswrapper[31411]: I0224 02:20:58.593423 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-f2lj9" Feb 24 02:20:58.602639 master-0 kubenswrapper[31411]: I0224 02:20:58.601626 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 24 02:20:58.602639 master-0 kubenswrapper[31411]: I0224 02:20:58.601992 31411 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="906fbe12-582c-4c3d-8417-22d9670712ed" Feb 24 02:20:58.604121 master-0 kubenswrapper[31411]: I0224 02:20:58.603716 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 24 02:20:58.604529 master-0 kubenswrapper[31411]: I0224 02:20:58.604485 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 02:20:58.604709 master-0 kubenswrapper[31411]: I0224 02:20:58.604612 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" Feb 24 02:20:58.635658 master-0 kubenswrapper[31411]: I0224 02:20:58.634895 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 02:20:58.635658 master-0 kubenswrapper[31411]: I0224 02:20:58.635319 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 02:20:58.654612 master-0 kubenswrapper[31411]: I0224 02:20:58.651888 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 02:20:58.654612 master-0 kubenswrapper[31411]: I0224 02:20:58.653134 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 24 02:20:58.654612 master-0 kubenswrapper[31411]: I0224 02:20:58.654451 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 02:20:58.654990 master-0 kubenswrapper[31411]: I0224 02:20:58.654743 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.655810 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.655961 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.656618 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 02:20:58.663590 
master-0 kubenswrapper[31411]: I0224 02:20:58.656674 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.656924 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: E0224 02:20:58.656942 31411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.657003 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.657065 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.657134 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: E0224 02:20:58.657274 31411 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.657391 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.657549 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.657604 31411 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.657734 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.657947 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: E0224 02:20:58.658001 31411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.658108 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.658139 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.658189 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.657948 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.658341 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.658499 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 
02:20:58.658501 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.658627 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.658679 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.659496 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.660272 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.660407 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.660445 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.660516 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.660701 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.660704 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.660895 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.661048 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: E0224 02:20:58.661620 31411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.661760 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.661891 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.662083 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.662237 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.662601 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.662711 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.662954 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.663063 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.663194 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.663267 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.663108 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.663590 master-0 kubenswrapper[31411]: I0224 02:20:58.663521 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 02:20:58.664999 master-0 kubenswrapper[31411]: I0224 02:20:58.664175 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 24 02:20:58.664999 master-0 kubenswrapper[31411]: I0224 02:20:58.664648 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 02:20:58.681975 master-0 kubenswrapper[31411]: I0224 02:20:58.681763 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.683554 master-0 kubenswrapper[31411]: I0224 02:20:58.682841 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 24 02:20:58.683554 master-0 kubenswrapper[31411]: I0224 02:20:58.683073 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 24 02:20:58.685001 master-0 kubenswrapper[31411]: I0224 02:20:58.683882 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.686600 master-0 kubenswrapper[31411]: I0224 02:20:58.685105 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 24 02:20:58.686600 master-0 kubenswrapper[31411]: I0224 02:20:58.685554 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 24 02:20:58.689651 master-0 kubenswrapper[31411]: I0224 02:20:58.688865 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 24 02:20:58.689651 master-0 kubenswrapper[31411]: I0224 02:20:58.688990 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 24 02:20:58.689651 master-0 kubenswrapper[31411]: I0224 02:20:58.689231 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 24 02:20:58.689651 master-0 kubenswrapper[31411]: I0224 02:20:58.689536 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 24 02:20:58.689821 master-0 kubenswrapper[31411]: I0224 02:20:58.689719 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 24 02:20:58.689821 master-0 kubenswrapper[31411]: E0224 02:20:58.689719 31411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:20:58.689821 master-0 kubenswrapper[31411]: I0224 02:20:58.689809 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 24 02:20:58.689915 master-0 kubenswrapper[31411]: I0224 02:20:58.689906 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.690010 master-0 kubenswrapper[31411]: I0224 02:20:58.689992 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 02:20:58.690057 master-0 kubenswrapper[31411]: I0224 02:20:58.690037 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.690153 master-0 kubenswrapper[31411]: I0224 02:20:58.690136 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.690198 master-0 kubenswrapper[31411]: I0224 02:20:58.690184 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 24 02:20:58.690469 master-0 kubenswrapper[31411]: I0224 02:20:58.690453 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.690527 master-0 kubenswrapper[31411]: I0224 02:20:58.690492 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 24 02:20:58.690587 master-0 kubenswrapper[31411]: I0224 02:20:58.690552 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 24 02:20:58.690655 master-0 kubenswrapper[31411]: I0224 02:20:58.690643 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 24 02:20:58.690743 master-0 kubenswrapper[31411]: I0224 02:20:58.690727 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 24 02:20:58.690787 master-0 kubenswrapper[31411]: I0224 02:20:58.690771 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 24 02:20:58.690891 master-0 kubenswrapper[31411]: I0224 02:20:58.690875 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.691033 master-0 kubenswrapper[31411]: I0224 02:20:58.691022 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.691120 master-0 kubenswrapper[31411]: I0224 02:20:58.691102 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 24 02:20:58.691161 master-0 kubenswrapper[31411]: I0224 02:20:58.691148 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 24 02:20:58.691633 master-0 kubenswrapper[31411]: I0224 02:20:58.691567 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 24 02:20:58.691633 master-0 kubenswrapper[31411]: I0224 02:20:58.691624 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 24 02:20:58.691723 master-0 kubenswrapper[31411]: I0224 02:20:58.691681 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.691723 master-0 kubenswrapper[31411]: I0224 02:20:58.691625 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 24 02:20:58.692610 master-0 kubenswrapper[31411]: I0224 02:20:58.691816 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 24 02:20:58.692610 master-0 kubenswrapper[31411]: I0224 02:20:58.691934 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 24 02:20:58.692610 master-0 kubenswrapper[31411]: I0224 02:20:58.692007 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 24 02:20:58.692610 master-0 kubenswrapper[31411]: I0224 02:20:58.692330 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.692610 master-0 kubenswrapper[31411]: I0224 02:20:58.692502 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 02:20:58.692610 master-0 kubenswrapper[31411]: I0224 02:20:58.692532 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 24 02:20:58.692959 master-0 kubenswrapper[31411]: I0224 02:20:58.691624 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.699935 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713209 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.708789 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.709860 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.709995 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.701369 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85222bf-f51a-4232-8db1-1e6ee593617b-config\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.701957 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.702318 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.701878 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.701144 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85222bf-f51a-4232-8db1-1e6ee593617b-config\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.701918 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713672 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-config\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713704 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-config\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713733 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85222bf-f51a-4232-8db1-1e6ee593617b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713755 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85222bf-f51a-4232-8db1-1e6ee593617b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713790 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713809 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713831 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.702507 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerStarted","Data":"439729411b82e0e29cbe7419552c7cfd16042f6d754bf9586cfa89a02bdfce23"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713852 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36d8451-0fda-4d9d-a850-d05c8f847016-config\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713880 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrmsh\" (UniqueName: \"kubernetes.io/projected/57811d07-ae8a-44b7-8efb-dafc5afad31e-kube-api-access-vrmsh\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713883 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerDied","Data":"b55dffa76d343a2f86cafe3f911f9b262284cdca97531fab28e85dab3a7157d5"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713904 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713913 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" event={"ID":"b36d8451-0fda-4d9d-a850-d05c8f847016","Type":"ContainerStarted","Data":"64c3913ef0868e964da24e47fde7afcb2edc5db0527066ac2d8451806802e649"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713932 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713926 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713970 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" event={"ID":"8c396c41-c617-4631-9700-a7052af5a276","Type":"ContainerStarted","Data":"6e72c582c52cc1175706db2ef4a54c95fdecae69c4b7d4caf28fde6f98e8eaa4"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713979 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3332acec-1553-4594-a903-a322399f6d9d-metrics-tls\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713985 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" event={"ID":"8c396c41-c617-4631-9700-a7052af5a276","Type":"ContainerStarted","Data":"1684de33a1b99214450be6d5f8c060f5a9f1a7c517e642d15f1d667f8a119c75"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.713999 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk" event={"ID":"91168f3d-70eb-4351-bb83-5411a96ad29d","Type":"ContainerStarted","Data":"48e2670880e2a3683b1d35f70f451bd1df96189544e8c098ed8ce993e49c290a"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714002 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714013 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk" event={"ID":"91168f3d-70eb-4351-bb83-5411a96ad29d","Type":"ContainerStarted","Data":"67bcd090168c3067b9df003f44922fafc0a07cb705086156a46960cbedf5866d"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714026 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk" event={"ID":"91168f3d-70eb-4351-bb83-5411a96ad29d","Type":"ContainerStarted","Data":"2335a36c0dbf2382a55062b41bd9de9b70220499140428315aa61fdfd6dde11f"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714039 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"cf86bb9cd234b5b6c8b149beb0c0baa8d6338735d55b02c552c06e59b051b932"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714052 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerDied","Data":"7ebe8b93ac5fdd43a48b73d5d4aae71709a78d8ce5151017ae8a23f470fe9ff8"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714065 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"71611d4b716a18718dc4de42fcc89f0b3de2244f35a8054faf3e9c668e532c8f"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714079 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" event={"ID":"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334","Type":"ContainerStarted","Data":"68a30a55ed2f979625e18a77c39c55f0bd820b511f058e5d010e556725054ded"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714089 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" event={"ID":"f85222bf-f51a-4232-8db1-1e6ee593617b","Type":"ContainerStarted","Data":"012d6b821fd3ee17fb9f5bf5451fa90bd22ad830cc6d1b88590aa2b80b979353"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714101 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" event={"ID":"f85222bf-f51a-4232-8db1-1e6ee593617b","Type":"ContainerDied","Data":"35a7a7e510655bf960c0ebb22ba6e0c6db1f746f1a9047b38fb6bb5a0b24bc60"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714113 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" event={"ID":"f85222bf-f51a-4232-8db1-1e6ee593617b","Type":"ContainerStarted","Data":"9b9766c83ab547d93c665b0d79f8c94f21cf677d4157ff5e1bc24f519048fa91"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714124 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerStarted","Data":"9646b934b2718a877d84d3844b7ac6e3d8136d87f64b2e2fac02d09f99a5f0af"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714136 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerDied","Data":"8f261203a7d383fb41e07035f6494d12c5ece1ea073bc54bdb848ad1f13db388"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714148 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerStarted","Data":"c63bd4b594bd4f6109b07378bf72db7f1a51b694e1fb2208c36ad6c33c119837"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714157 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" event={"ID":"4a2d8ef6-14ac-490d-a931-7082344d3f46","Type":"ContainerStarted","Data":"77149d9718de6b23dc52d1b4901db52831b3adbf959e9b29aeeec05c6d2db97e"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714168 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tntcf" event={"ID":"70e2ba24-4871-4d1d-9935-156fdbeb2810","Type":"ContainerStarted","Data":"7d3c45256a841eb70d92b2ac89a09096ea7f9fe8d5d44f496e1fd00978ad355c"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714179 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tntcf" event={"ID":"70e2ba24-4871-4d1d-9935-156fdbeb2810","Type":"ContainerStarted","Data":"9fc038be325d2c443e30ca715ec9ed96813d02435d90dbec4dda98e800fa480c"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714190 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tntcf" event={"ID":"70e2ba24-4871-4d1d-9935-156fdbeb2810","Type":"ContainerStarted","Data":"39f5130f968afbc399fff9d4193b2dc7547fc4010b24b675d8cfe4f908871554"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714201 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" event={"ID":"12b89e05-a503-47aa-90b2-4d741e015b19","Type":"ContainerStarted","Data":"e83ff6dbf21d18933989d16dabdd55b76dba2e6aefd8a69a1cc0797cddc207b9"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714221 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" event={"ID":"12b89e05-a503-47aa-90b2-4d741e015b19","Type":"ContainerStarted","Data":"5365291f917df72d79e3f7635bb96352fde98df3b379b1b1c5331b7f5952d294"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714232 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" event={"ID":"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c","Type":"ContainerStarted","Data":"fd1584ac1124bfeb40135ff2c60362b0107b9088e06ac8a200164cc1456f163b"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714243 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" event={"ID":"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c","Type":"ContainerDied","Data":"e9d2b8b0026aada75f8d27003b5a7df3b3f0253b60da8ae01339ca6b74582705"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714255 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" event={"ID":"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c","Type":"ContainerStarted","Data":"d884f8a9271a3be209f8b517c106210cb5d535a1b46d052e9c8de84e6be62441"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714268 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" event={"ID":"e68b3061-c9d2-469d-babf-7ccac0ad9b14","Type":"ContainerStarted","Data":"56d3fdb5093e1d4eccefbac182f17fc4900bc5eca061248db29c85445165c4dc"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714281 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" event={"ID":"e68b3061-c9d2-469d-babf-7ccac0ad9b14","Type":"ContainerStarted","Data":"e6b7fffe1df34eb4ec92449883945249c2a727f4ae3f99a2b5f3f554aa75c619"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714284 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-config\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714303 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b36d8451-0fda-4d9d-a850-d05c8f847016-config\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714291 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl" event={"ID":"e68b3061-c9d2-469d-babf-7ccac0ad9b14","Type":"ContainerStarted","Data":"14bae3b4a416d85f1092a8f00a7f0b630edcc978d596d53dd23dbb652596ecd8"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714027 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovn-node-metrics-cert\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714350 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" event={"ID":"7b098bd4-5751-4b01-8409-0688fd29233e","Type":"ContainerStarted","Data":"f292b51fcd18744289c5ad0eb2ad98ecedf1200c5c17b00777cb5c2c1e7e3e7d"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714380 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" event={"ID":"7b098bd4-5751-4b01-8409-0688fd29233e","Type":"ContainerDied","Data":"cd4d613347fcadf7d18c80092bf87aa7750bed8421d9d1f3c7ea9c90740b5523"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714395 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" event={"ID":"7b098bd4-5751-4b01-8409-0688fd29233e","Type":"ContainerStarted","Data":"35a3cac3cfce9496c0f221e8539970cdcedf87aabbcb92ba9a5c445596750d49"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714408 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerStarted","Data":"ebba0a8319d4fdb20034a34d2683c5848263067d143ef6656f178fc12ac2c957"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714422 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerDied","Data":"a75bd245067c2bd96dc6595a801a2c01cb01bd1d3a373e46bfec3a120455dff3"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714436 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerStarted","Data":"6f03a78798c0233a4b142276c0a188023ee91558d59191ef7fa3f909cb0c5802"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714447 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-p5b6q" event={"ID":"adc1097b-c1ab-4f09-965d-1c819671475b","Type":"ContainerStarted","Data":"fe31ec10252333ed40b830b2aacce6de4e895210a7d0b0aebae765349ccfa670"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714460 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-26b2v" event={"ID":"638b3f88-0386-4f30-8ca5-6255e8f936fc","Type":"ContainerStarted","Data":"b391ee85aee1f228cd791d5e8c18c2ef5e9b62963dca456df8107dfcd3ddc959"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714472 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-26b2v" event={"ID":"638b3f88-0386-4f30-8ca5-6255e8f936fc","Type":"ContainerStarted","Data":"813fd4ddbe7cd984c71971b2f1f90cf9374e4a929ba9dc48db8a98bf388f1d1f"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714484 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerStarted","Data":"448360d167a3924b4e80b020c352dc3f31f6a37b9004d07ffe025473c90dfad5"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714497 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerDied","Data":"8ea7a40a6269170c26a7054768f3ac9bed9d2f95f70afef7c8db1dfa7590c2a4"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714500 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-config\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714509 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerStarted","Data":"5d6948ce490f3fa6ce851d875800c55d419c3dab3ddc783fd0565943ba63fbf3"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714561 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" event={"ID":"4f5b3b93-a59d-495c-a311-8913fa6000fc","Type":"ContainerStarted","Data":"d789b7d1d1c624f3c1461f3405b95a301ab5f66347a0727135e2339f341d9052"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714609 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="469ea89978a106e8d886f7d5bc0536175799cd2440585cb8454e806af61e0350"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714620 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovn-node-metrics-cert\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714621 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5rf6m" event={"ID":"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c","Type":"ContainerStarted","Data":"231439d030fa0bf89da2c3a1b0ebaeaa5fe611600712dfc3abffa791f37a575f"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714681 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714695 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5rf6m" event={"ID":"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c","Type":"ContainerStarted","Data":"13cd6a6aa3859231bf27568381f642374c95090918911bed4bf38f7204f41cd6"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714718 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-dns/dns-default-5rf6m" event={"ID":"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c","Type":"ContainerStarted","Data":"6dc4ae2fbc88ea5c43de5d695fea8a7c8829343138c8856dcfeff187994c5c0f"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714734 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d130af32cfc701e2db477383fd28dc66d411455c0f39e12c729c963e3f569427" Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714748 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" event={"ID":"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2","Type":"ContainerStarted","Data":"0ebc34731370eef8de7778627bb84636b6e1c8e231035e298fc1c33b8cc5b26c"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714760 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" event={"ID":"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2","Type":"ContainerDied","Data":"fcdcbf5a149a0b578453f994965bc5ee1ca152377a3fb51c3ba2512d342fe454"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714774 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" event={"ID":"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2","Type":"ContainerStarted","Data":"ff39808811189af69f67503d76fa167bb97add817a078f10dcf74a7660201e4e"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714786 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"f26d70a857d43fac3deacb2102ae3da953979c9be93877036525bd880271cb08"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714798 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" 
event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"5fbafce85063f872b1786e48e809b15f5aa08369e9d34c7d53d1c636ed17075e"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714812 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714823 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-f2lj9" event={"ID":"7fa1462b-8f1c-4a77-9c1c-f0f79910737f","Type":"ContainerDied","Data":"35c312973828464e3d9786034ffddad219bbd2d62792822db99238b48a9c981d"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714837 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-f2lj9" event={"ID":"7fa1462b-8f1c-4a77-9c1c-f0f79910737f","Type":"ContainerDied","Data":"acf2d712c422a3cffe6f1ecfbb7b0b5262ef2f0636bb7727cd7fd72186b9cf69"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714847 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf2d712c422a3cffe6f1ecfbb7b0b5262ef2f0636bb7727cd7fd72186b9cf69" Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714862 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" event={"ID":"6320dbb5-b84d-4a57-8c65-fbed8421f84a","Type":"ContainerStarted","Data":"0a8ba6ce68a26edc1451ec182c1b30b8fe0084be7cbefabb415d809de554af9e"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714875 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" 
event={"ID":"6320dbb5-b84d-4a57-8c65-fbed8421f84a","Type":"ContainerStarted","Data":"8fb2770446d8dd83e57430002eaccc69501e1f4b4da7692571e4fb967ebfeb35"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714886 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" event={"ID":"6320dbb5-b84d-4a57-8c65-fbed8421f84a","Type":"ContainerStarted","Data":"35cb29197091c21cf559145587727d1cb31b46813a4d0aded5d8409120c45182"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714899 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerStarted","Data":"36bf2499ceb16a6789bfaea260bc661782023dffc5c354b07ad277186683d4ac"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714933 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerDied","Data":"2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714948 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerStarted","Data":"89a7efb88fa53095b161b71e1b8530a4c4c20e49713a6786f3a59609c9325838"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714959 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" event={"ID":"a5305004-5311-4bc4-ad7c-6670f97c89cb","Type":"ContainerStarted","Data":"8ae69ce84c01e5e4b4fac1f5290af78bea77d7988dab5915ebc9b71d7bb9c8b4"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714973 31411 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" event={"ID":"a5305004-5311-4bc4-ad7c-6670f97c89cb","Type":"ContainerStarted","Data":"c5800339748674f2134be8e9b847e6be2d094f1a815b59844e33e063e8189399"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714987 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" event={"ID":"a5305004-5311-4bc4-ad7c-6670f97c89cb","Type":"ContainerStarted","Data":"2072e285d4993b0d4fb9ce8970b1145b2c5b69f9a6f5dae58087e7fa262a83b4"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.714998 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" event={"ID":"a5305004-5311-4bc4-ad7c-6670f97c89cb","Type":"ContainerStarted","Data":"a8543f0f38e0eb6ba966eb5116372824fd9c0ed337e28520e9ed982545e8ec8c"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715009 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" event={"ID":"9cad383a-cb69-41a8-aec8-23ee1c930430","Type":"ContainerStarted","Data":"29a0512f64a48cd44b18e37c89ec77f34d9d3c4b94ceaef45526fc50d80bc784"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715022 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" event={"ID":"9cad383a-cb69-41a8-aec8-23ee1c930430","Type":"ContainerStarted","Data":"61e5e77001e1e5b4b53f6c82868401419bbcf0e5600dbe4c283c403c8bc8a720"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715032 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" 
event={"ID":"0ce6dd93-084c-4e15-8b7c-e0829a6df14e","Type":"ContainerStarted","Data":"4b48f973de3ca235282063a3a1d4565ab0f94d9aba4a7a4a30b638daf156cb07"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715045 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" event={"ID":"0ce6dd93-084c-4e15-8b7c-e0829a6df14e","Type":"ContainerStarted","Data":"da1d53ab7e4f85fb1b75944d0a63b108d12150f96511744937d45deda5882b0f"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715056 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" event={"ID":"0ce6dd93-084c-4e15-8b7c-e0829a6df14e","Type":"ContainerStarted","Data":"7a26b2c8abf7070791144c3b808d314860b4cb305adbb9095d34928ddbac7f4a"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715067 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"73250fbf83eb734a494f12593474f38faaba12f425754ad28c833c6cc94b24a7"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715077 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"4ea164dd4d44c905424ce0b0b3ea58702494938b88cbbbe52d4ce16914c7762b"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715091 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"17bbd7fbbeaf0dec034d902b8e7575b6559c59f312f1a534ffc4119208ba1272"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715103 31411 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"fb524201fadda92a97019a1e36f215d113e21212244e9e77433e72e6adcfc793"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715115 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" event={"ID":"27f0c4d0-17dd-49ed-a8a4-7be1d82738c7","Type":"ContainerDied","Data":"f562d675528b0b4d595c589a6e7d7e57e6c855daab8a48f034a1f5df7b3b29a7"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715128 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f562d675528b0b4d595c589a6e7d7e57e6c855daab8a48f034a1f5df7b3b29a7" Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715139 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" event={"ID":"b176946a-c056-441c-9145-b88ca4d75758","Type":"ContainerStarted","Data":"a40a29f749e0573ac6d90972333ff728d387ff0f88ce4e87bf1c84f1f2298927"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715151 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" event={"ID":"b176946a-c056-441c-9145-b88ca4d75758","Type":"ContainerDied","Data":"428d9eac1aa20c4549b1b4238b89b04f2faa950b7b0a74457007efebb7f09258"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715164 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" event={"ID":"b176946a-c056-441c-9145-b88ca4d75758","Type":"ContainerStarted","Data":"0e94bb6d8da81f692c353aed9041e8cea1ef96da518c0c68ab1453f8b2183856"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715176 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/installer-2-master-0" event={"ID":"50c78047-1c4d-4535-ba2c-31f080d6a57d","Type":"ContainerDied","Data":"7bb232625f3494579f18ed676cbbdfe8d63a7f633ead8439889c5a5bfa8b5a12"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715190 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"50c78047-1c4d-4535-ba2c-31f080d6a57d","Type":"ContainerDied","Data":"073c8b4053396ee6cbbc1314c9a8361c6af6be047fe705c3463b911834d8b963"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715198 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="073c8b4053396ee6cbbc1314c9a8361c6af6be047fe705c3463b911834d8b963" Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715207 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkwwl" event={"ID":"a4267e3a-aaaf-4b2f-a37c-0f097a35783f","Type":"ContainerStarted","Data":"e749b70347c5e0663b75c02245f5717b4cc199456a6d1ef4ebc3ed6a3962364b"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715222 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkwwl" event={"ID":"a4267e3a-aaaf-4b2f-a37c-0f097a35783f","Type":"ContainerDied","Data":"1c5e5c71f0526c11a274e2510dcba4e8e573cc7f5dabb80f3b317e540bfa20fd"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715234 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkwwl" event={"ID":"a4267e3a-aaaf-4b2f-a37c-0f097a35783f","Type":"ContainerDied","Data":"17cb16eb2bff3a3eb8a7a600aa48da74788c2b2cfe679ee292ca901ee7cdc53d"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715244 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kkwwl" 
event={"ID":"a4267e3a-aaaf-4b2f-a37c-0f097a35783f","Type":"ContainerStarted","Data":"200fe8533b917a93531075b1165c1c0e535a0cad7b646b4108d64e256389832b"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715256 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715267 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715278 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715288 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715301 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715311 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerDied","Data":"4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715325 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"888e23114cf20f3bf6573c5f7b88d7d0","Type":"ContainerStarted","Data":"7b8c524be621d3b232cfcc53d4958e12d26da68f7931a17964ec87b85eee7bba"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715338 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" event={"ID":"df42c69b-1a0e-41f5-9006-17540369b9ad","Type":"ContainerStarted","Data":"67ca08a71ef0ee4b6264d0683316a5400dc6c91bab3f8b6764b01d1d93cebc51"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715350 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" event={"ID":"df42c69b-1a0e-41f5-9006-17540369b9ad","Type":"ContainerStarted","Data":"7f363b229955dc8837df5e264f53a130100fee7d47644a7ba2897d8fb2e9598c"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715361 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hfpql" event={"ID":"df42c69b-1a0e-41f5-9006-17540369b9ad","Type":"ContainerStarted","Data":"8f97b854deca962131113e1e495c21e96d538f802eb3f5de41bedce3ba1452e3"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715373 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" event={"ID":"732a3831-20e0-47dc-a29a-8bb4659541b7","Type":"ContainerStarted","Data":"535ac4e7253be828eb8365a54787c4f76f98be222f5aa26756ea0b019790bb97"} Feb 24 02:20:58.715644 master-0 
kubenswrapper[31411]: I0224 02:20:58.715384 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" event={"ID":"732a3831-20e0-47dc-a29a-8bb4659541b7","Type":"ContainerStarted","Data":"bb7daa4c061606545ddec9122a80563e4f785ed9c98cafaa54bb7196f126bd02"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715395 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" event={"ID":"e8d6a6c0-b944-4206-9178-9a9930b303b9","Type":"ContainerStarted","Data":"00fb88dd6ea7a9ddb5d7ddf189575d130d7630f12716d8a17f87c83ef377ea62"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715409 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" event={"ID":"e8d6a6c0-b944-4206-9178-9a9930b303b9","Type":"ContainerDied","Data":"5ed088abb8fdf119602dca1779c3b84da28af95aaab8dcf8c7df738c7d83aa56"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715447 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" event={"ID":"e8d6a6c0-b944-4206-9178-9a9930b303b9","Type":"ContainerStarted","Data":"5c3398a6c263edc9332a777f898c18bf8d4d5354af4bc2396e80f920a1e77f07"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715459 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" event={"ID":"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d","Type":"ContainerStarted","Data":"82424b9b463daf23d9c394c69448182da116e0694938488e2079645b0e8398dd"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715471 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" 
event={"ID":"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d","Type":"ContainerStarted","Data":"25626cd2d8668a56750cfaaad01d432499b9f72732dea0fb561e1b3a7aadf5c7"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715483 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" event={"ID":"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d","Type":"ContainerStarted","Data":"824e46bd462387249603454ca45d03ebca612478cdeb336ee057821a4d25b262"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715493 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"95806c9442ee27c355bfbf25ba6f70f0","Type":"ContainerStarted","Data":"d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715506 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"95806c9442ee27c355bfbf25ba6f70f0","Type":"ContainerStarted","Data":"a97c301937f1a0e25ebd74de8f7b7dfda3c088599ac5506143f0a1006a2bb044"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715518 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerStarted","Data":"07dffa654def49a750cd8c0b2fc4b62a229828d8d6aee70a6f54eea4290dad7f"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715532 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerDied","Data":"a6b3894561c0264e4475f846f6d8a4a1bfc54fcd4a779abb17626105631909c8"} Feb 24 02:20:58.715644 master-0 
kubenswrapper[31411]: I0224 02:20:58.715545 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" event={"ID":"cabdddba-5507-4e47-98ef-a00c6d0f305d","Type":"ContainerStarted","Data":"e54e467fec77344a591689afeb76ae49385e45cfe4c4aeb2a94eec65da6cdce5"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715557 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerStarted","Data":"623a5a87ba4a222b47eda7260b4c2df4f22508607ad25afe2feb179bbb1bb4b6"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715583 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerDied","Data":"16b88bdb19342563d81116f8e13c7e868beadde1813cafcd204be1678600a199"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715597 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" event={"ID":"6a08a1e4-cf92-4733-a8af-c7ac5b21e925","Type":"ContainerStarted","Data":"2dab12c36fbca650a107bc58df00044fd6561209f9c466f04a4c8ce72b69201d"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715609 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" event={"ID":"db8d6627-394c-4087-bfa4-bf7580f6bb4b","Type":"ContainerStarted","Data":"19d33a97db38aed4fe60654f6c6d7b0c8c528614fa81fd6b06e33fcb80383ce0"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715624 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" 
event={"ID":"db8d6627-394c-4087-bfa4-bf7580f6bb4b","Type":"ContainerStarted","Data":"9f51b04d3c1486984e3307c52dc019c9d3269455962317058730d274ae7bbc94"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715637 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" event={"ID":"db8d6627-394c-4087-bfa4-bf7580f6bb4b","Type":"ContainerStarted","Data":"0fcf5ec505a8c60fa755fbe1033404d9a2bfa8dd51c8a8904db6a212bec7d594"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715654 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"dea6fc53fd855f67e834ba785b44e2405a5a05c89259e81225c2459f00ab9410"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715666 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"f7e48d7a0d2c98b03ed618e0f0670a90b569c740794e4265b966c9259a6da4db"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715678 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"8dc4659ecc15c6bfe49a9925903b4f4687f239838392613c4865c83a8905650b"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715689 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"802885a1b2ef2df10b5ffa5642c921493548e8e31f3eaf3dc4bdd2f7c156af95"} Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 
02:20:58.715700 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"986668ae1bbdf9cce9dceeca068e9031","Type":"ContainerStarted","Data":"d7b6636edfdbd08e77deab1053c053904d89a5b39e0804954f8c80e56f1c9467"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715712 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerStarted","Data":"f8325f862e5ed3523628a1c7d7820991fcc2eb76b4b3b4a1a68c402a2679e36b"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715728 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerDied","Data":"de767e9040507053af6384aa3263bafeca9d1e8f1b629cb4c6dfbeef7e5cb93d"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715743 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" event={"ID":"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5","Type":"ContainerStarted","Data":"e7c778232fad4af52f47c31c73f233a65718cb5d7849085291ba01455710c481"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715755 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" event={"ID":"a4cea44a-1c6e-465f-97df-2c951056cb85","Type":"ContainerStarted","Data":"fbabd347815449a60e4b2b5993aa92710830e3b0983b74f7142b339cad432918"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715768 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" event={"ID":"a4cea44a-1c6e-465f-97df-2c951056cb85","Type":"ContainerDied","Data":"333b5a9659128a52ffaed5da3c25d8feb0986d4e855c20f96e40ad31f9cb9171"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715781 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" event={"ID":"a4cea44a-1c6e-465f-97df-2c951056cb85","Type":"ContainerStarted","Data":"cee49e60dc37b41a9c1559a523e9c4e3b09f5f3e76df27a36cc4a9d63ff6bee9"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715791 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-rjbl5" event={"ID":"d8e20d47-aeb6-41bf-9715-c437beb8e9e4","Type":"ContainerStarted","Data":"e3388d1f93809d1412794f7fce092cae4e044368882706df0d8c690d58cc928d"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715804 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-rjbl5" event={"ID":"d8e20d47-aeb6-41bf-9715-c437beb8e9e4","Type":"ContainerStarted","Data":"29621914ea05b7d9aefb3ef92742f6212ca05bc6251d28674ae45265f66276a1"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715815 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"64b7ea36-8849-4955-80b5-c7e7c12fcc29","Type":"ContainerDied","Data":"266ce948594252c2399468918fec845a74da7e6fcd999550c798b018f78a387f"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.699997 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715829 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"64b7ea36-8849-4955-80b5-c7e7c12fcc29","Type":"ContainerDied","Data":"88f76db39d71bd25ceb20f4306d7f26b67459cb15713885a8eb24d8304cfae77"}
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715891 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f76db39d71bd25ceb20f4306d7f26b67459cb15713885a8eb24d8304cfae77"
Feb 24 02:20:58.715644 master-0 kubenswrapper[31411]: I0224 02:20:58.715906 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" event={"ID":"8e0c87ae-6387-4c00-b03d-582566907fb6","Type":"ContainerStarted","Data":"28c8964d64effe5cfb9aee600d94edf8ec500cf08e78ee4ba28d38f4864c5e27"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.715922 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" event={"ID":"8e0c87ae-6387-4c00-b03d-582566907fb6","Type":"ContainerDied","Data":"2b6a32bb9499d220c3167aa1a2fb2a91c9d1624533d8361accf480baf5e26efd"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.715937 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" event={"ID":"8e0c87ae-6387-4c00-b03d-582566907fb6","Type":"ContainerStarted","Data":"627eb8f17d5fd787312a09b22ed574b8b738c499ce476e336267fe5d3546a7b9"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.715950 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" event={"ID":"24983c94-f158-4a07-854b-2e5455374f19","Type":"ContainerDied","Data":"e51ee7952dfd445e552e98f87e6cac337f269d310845fcc3274f65c031cd5dd3"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.715965 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw" event={"ID":"24983c94-f158-4a07-854b-2e5455374f19","Type":"ContainerDied","Data":"4953fab92508c627d5109fd9998dd27ef32a95f892628beaf8d18c65fdcda821"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.715975 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4953fab92508c627d5109fd9998dd27ef32a95f892628beaf8d18c65fdcda821"
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.715987 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerStarted","Data":"08da08bd752d65477ab01471c9630dda4850b6474f22c31e418eb4d79c852e14"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716000 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerDied","Data":"6dce9a18bd8067d5b09584fe75915e8862f74f2dfbc221872c96fc50a87d78c5"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716014 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerStarted","Data":"fe6c9d0cd94245484579be53b58d962cf0308943b0463bad3a228d1517043027"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716026 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" event={"ID":"523033b8-4101-4a55-8320-55bef04ddaaf","Type":"ContainerStarted","Data":"5ba2f6486b90f665f4193dee37876ce40336ba0c3b009bf85c911f6014a84585"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716041 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jjpsc" event={"ID":"3e36c9eb-0368-46dc-af84-9c602a15555d","Type":"ContainerStarted","Data":"687d08c64fa062df61a0c3e82a45be4f2c11c06761c616052b9e81d2135f7d70"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716055 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jjpsc" event={"ID":"3e36c9eb-0368-46dc-af84-9c602a15555d","Type":"ContainerStarted","Data":"0034746b398351f91b0a88e97985b40bb4895c122d618141a5cb5cca87941d23"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716067 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"97b35331462aaa0a369d9be117443f796eda569592d8ad8fbb17987616408b1a"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716078 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"f39565eb2bd7d13e3f60e289b141d73bd5aa5e4222b88ea5807cf96c776110cb"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716089 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"85b8f7888ec204fa9fea6a7b1efb488127963819c9272f83acc89eb73dc0b286"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716099 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerDied","Data":"092cbfe8313c87ea7a79610f389e04195756c11c4aca575ebaf70dbe1a3f496d"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716111 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerDied","Data":"d769b62bae7da248060974b37bc61a65e0831df5e231d5c8e62a89bc58f3df85"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716121 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" event={"ID":"8e70a9f5-1154-40e9-a487-21e36e7f420a","Type":"ContainerStarted","Data":"1cf9c3623efa047b3c733a0c601bd847d659d71e97fdf999c590347704f0d5c3"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716132 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"38979f2032cf905704224ecf95ea405c1d3628e64550fb0512de42cd82d16c3b"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716147 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"4c5a831827aa8c6629b7edf7fcbb96edd2fce87cb622d23171cd4cc0b00518a0"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716156 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"dd9e8ab8ba4d28f1a7531541449124d4c22497cafc8d64913441d5478c0d7774"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716166 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"39edb4f2fd57b036ce39c1f95caba6b35ee9046cbc47aee42bae09ac48747aa5"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716162 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716177 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"0c6beafb153b866ff3e8e8fd1b01b6ddbde73e4585489b844b97c2df21c90765"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716187 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"bb9a80ed6d7d7eb83242571f651240d13b6fe2b3ccfaff6770e496961a1600a5"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716198 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"e47b414c847a54539f28e830435dfe61ba2d4309c2e9e84ac24e938ca23917ff"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716208 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"e6d11eb8af0756a7414e361a0a41884731f78257822ffdb122c02d11a1914c35"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716218 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"c28e00882d5ec6f229538a77e2756ba9244f00b81361306d5103b5a9571bb19f"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716229 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"bd02da41-8a48-4436-ae58-6363e7554898","Type":"ContainerDied","Data":"beff9cdd09dcda0a6932e333a63d749970c5574701c511858c571df2f87fa178"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716242 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"bd02da41-8a48-4436-ae58-6363e7554898","Type":"ContainerDied","Data":"66e7c8048611a89e5a8013562e4ce272875677cb82b5701298ff4ad3f7e01366"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716251 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e7c8048611a89e5a8013562e4ce272875677cb82b5701298ff4ad3f7e01366"
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716261 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5508683b-09ae-47a1-89fd-b0891a881e09","Type":"ContainerDied","Data":"6bd403605e79109075e7b61bac31b57ae266809e2fcec35f73761229b419851f"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716274 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5508683b-09ae-47a1-89fd-b0891a881e09","Type":"ContainerDied","Data":"84c61cc1c9db5c98d72530e3929e9be3931d553cf93f1ab6621e14c53b3476fa"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716286 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c61cc1c9db5c98d72530e3929e9be3931d553cf93f1ab6621e14c53b3476fa"
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716296 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" event={"ID":"74a7801b-b7a4-4292-91b3-6285c239aeb7","Type":"ContainerStarted","Data":"ac010ae93fa7a3f9d57b4980dd10c5273055c70374932bda4fb37b79384ffe47"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716307 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" event={"ID":"74a7801b-b7a4-4292-91b3-6285c239aeb7","Type":"ContainerStarted","Data":"613df6266ab2db0a40595ffadff232bad8adba1e1c946c35a0e200ec0ca8ec5a"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716316 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" event={"ID":"74a7801b-b7a4-4292-91b3-6285c239aeb7","Type":"ContainerStarted","Data":"89afab5096911f8752c791dc8598fa3869a80370cd36dec7298c9b9d91c19d81"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716326 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" event={"ID":"c84dc269-43ae-4083-9998-a0b3c90bb681","Type":"ContainerStarted","Data":"f2168e99f1f05c5d55e4f3c5a9f0f43a42237ceed5d8da4d7ab8c9252dfaf352"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716337 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" event={"ID":"c84dc269-43ae-4083-9998-a0b3c90bb681","Type":"ContainerDied","Data":"d334785d9219b7d8b6844162f83a93171c2b15443bfd41ab182a8af1bf2c16e9"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716349 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" event={"ID":"c84dc269-43ae-4083-9998-a0b3c90bb681","Type":"ContainerStarted","Data":"1902deada2be96bfa5d915252c7df17f18da007080acab9c3aa02ba85365b1cc"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716359 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" event={"ID":"fcbda577-b943-4b5c-b041-948aece8e40f","Type":"ContainerStarted","Data":"4d0f5f1383f3fd6438bc21f29c7714007c4b1b3f11506fd58ea51a3c14b41a68"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716372 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" event={"ID":"fcbda577-b943-4b5c-b041-948aece8e40f","Type":"ContainerDied","Data":"b052a287fe47bb0fa4d703a983f03a367ec9eff6f9c816432ac44ee4c3a812f3"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716385 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" event={"ID":"fcbda577-b943-4b5c-b041-948aece8e40f","Type":"ContainerStarted","Data":"e6d28a4266f3905d697e133577d4e67e6ee815cccb7f5ef59b536b8c0d26cb94"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716394 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" event={"ID":"f807f33c-8132-48a8-ab12-4b54c1cd2b10","Type":"ContainerStarted","Data":"edd50eb47e10b09f1c0c971bc402155dbd7033b3e0dee0c8f6cc4bb8c1175ca2"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716408 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" event={"ID":"f807f33c-8132-48a8-ab12-4b54c1cd2b10","Type":"ContainerStarted","Data":"b349be3a51fa8e9742ffa9ffec1fca593b97c110bfc1659b3565ff20080159a9"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716417 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" event={"ID":"f807f33c-8132-48a8-ab12-4b54c1cd2b10","Type":"ContainerStarted","Data":"90a0d6bae4f861a78e1bdfe5f47cce060c508d92cdd797bd0fed3982c351779c"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716427 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"c12652f5-003f-4b77-b2bb-b666c9d7bb53","Type":"ContainerDied","Data":"50801a56a4404416a44874540419cd05a4a4bedf1fb5022f9e0b4725f3c11f4d"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716440 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"c12652f5-003f-4b77-b2bb-b666c9d7bb53","Type":"ContainerDied","Data":"d91c1d25f97f5902e0cd98da21fb3d84dc557631fbc1bb6bed501fad908da85d"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716440 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716449 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91c1d25f97f5902e0cd98da21fb3d84dc557631fbc1bb6bed501fad908da85d"
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716460 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" event={"ID":"c6153510-452b-4726-8b63-8cc894daa168","Type":"ContainerStarted","Data":"744949b5aa1c1c17d025fe44f4d0b2efdedae3bce2dd2885b36b07a915024ace"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716472 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" event={"ID":"c6153510-452b-4726-8b63-8cc894daa168","Type":"ContainerDied","Data":"cc397ba4850c1884796fd99f77165bb7223cb379b9ddec9b0da649d7c4abf92f"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716484 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" event={"ID":"c6153510-452b-4726-8b63-8cc894daa168","Type":"ContainerStarted","Data":"586631f1005e0eec9e04637dd3347ca45f3a799902b12b7ee4c09257fef0aee9"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716494 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"683deae1-94b1-4c17-a73f-ad628a09134b","Type":"ContainerDied","Data":"94401de1842b75a4dd153e2d7cb3bd01f3f26706beddf59514cdea6c0eb4a139"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716508 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"683deae1-94b1-4c17-a73f-ad628a09134b","Type":"ContainerDied","Data":"63cac87aa9f86fe69782f0e078c00d8a3d420e25f4f78bbbbc9cfebe09080f84"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716516 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63cac87aa9f86fe69782f0e078c00d8a3d420e25f4f78bbbbc9cfebe09080f84"
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716527 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" event={"ID":"9b5620d6-a5fe-45d7-b39e-8bed7f602a17","Type":"ContainerStarted","Data":"0c62d0c7c4600387db3c442e781dcbd028ad9bd230843d85d89b3999a7a687b8"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716540 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" event={"ID":"9b5620d6-a5fe-45d7-b39e-8bed7f602a17","Type":"ContainerDied","Data":"18e36dffd25cf50db60c55874ae5e83aa35faa4fa1dff2c477ec4899a01aa1f0"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716551 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" event={"ID":"9b5620d6-a5fe-45d7-b39e-8bed7f602a17","Type":"ContainerStarted","Data":"83490c1a955fe6b943eda48c6b81b0120dda14df023aa9b81ab0e80b7e90cadf"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716562 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerStarted","Data":"577e534e7b46ead634ade626be31364b87f35a324373685d74e9e47dc0da5b44"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716589 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"0261b05dc86f44c57d1260d8e9e574b7afb0942396c397b4be98f1486a4e967b"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716601 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"46f1df0f3044924b6c94bc53975525ce01b17baddc32b6007d1fff90c64f595f"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716611 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"cfdb24d0d0b1a9e1ffe1c98259396806799adff6a318a37a19e4e31ee02f6987"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716620 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"bdb96a50270730f3bce2e557a04b02a2063f4f2e15fbd55d5081bf5036b5f652"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716631 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"73bcd3ba04771dbfaf54cb795e59bd88d55d88d355f426be066ffb50beee1f86"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716642 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerDied","Data":"78fb207cbc767c0fee7b7d210f99c9aaf3165a7c791dd4e586c95fb618507ed8"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716653 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jtdht" event={"ID":"57811d07-ae8a-44b7-8efb-dafc5afad31e","Type":"ContainerStarted","Data":"db3e2d765c6b8a0f8e83a15ab78326f0bd14411e923e027c42dbca04e32ebad8"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716663 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2qn8m" event={"ID":"24765ff1-5e7d-4100-ad81-8f73555fc0a2","Type":"ContainerStarted","Data":"505153a6b58ee5fdb40b64cf1449d1ab8536604ceefcf028d98144e91d2cd947"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716675 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2qn8m" event={"ID":"24765ff1-5e7d-4100-ad81-8f73555fc0a2","Type":"ContainerStarted","Data":"634ece1d92bdb1ceb44b0e5c54c19504b4ec18f00008defdfe406f50026a70a8"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716685 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2qn8m" event={"ID":"24765ff1-5e7d-4100-ad81-8f73555fc0a2","Type":"ContainerDied","Data":"1f52501726ed970c81fbc87519c42dbbcb0a0375319ca30b25aeac0dc7303da1"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716696 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2qn8m" event={"ID":"24765ff1-5e7d-4100-ad81-8f73555fc0a2","Type":"ContainerStarted","Data":"a547b5d4267673c7d0d24b2e2ba4109ca6066121198db068b1fa1a5a39df064e"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716706 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" event={"ID":"f2e9cdff-8c15-43df-b8df-7fe3a73fda86","Type":"ContainerStarted","Data":"2c84a94f2a6a9cb8677b242aead424cd42f233786a43d4ea77fa8c1270383306"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716718 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" event={"ID":"f2e9cdff-8c15-43df-b8df-7fe3a73fda86","Type":"ContainerStarted","Data":"f571f0a4aeeadbbb146ec437860c7a57c1e485b485fb0691d9981c6a2b22a120"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716729 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" event={"ID":"3332acec-1553-4594-a903-a322399f6d9d","Type":"ContainerStarted","Data":"557c7ff2e8e380b71d6ccd161c67b68838831e347351c85dd62355bb14a7036c"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716743 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" event={"ID":"3332acec-1553-4594-a903-a322399f6d9d","Type":"ContainerDied","Data":"ab02a48b627b8d33941cca436e701fe8cca5e7a818343839afa499d5ab3abe6a"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716755 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-drrqm" event={"ID":"3332acec-1553-4594-a903-a322399f6d9d","Type":"ContainerStarted","Data":"46f23e74184a869450a53e076049b086fc11c3d08fab3acc813aa63061b356f3"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716766 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" event={"ID":"6a9ccd8e-d964-4c03-8ffc-51b464030c25","Type":"ContainerStarted","Data":"5a7ad2a781522a5adfd41c8ba931c6dfc84f053a55b73cc07dafd244e7f970cc"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716777 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" event={"ID":"6a9ccd8e-d964-4c03-8ffc-51b464030c25","Type":"ContainerDied","Data":"3b252dadf775881151b12a03ae689a3184f813df8c5304f84973c53cfd29954e"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716790 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" event={"ID":"6a9ccd8e-d964-4c03-8ffc-51b464030c25","Type":"ContainerStarted","Data":"56b5dc5b3e9740ae05d95dc7b2a84307e363cddd956bef52b197b1f840f462b7"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716799 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" event={"ID":"df2b8111-41c6-4333-b473-4c08fb836f70","Type":"ContainerStarted","Data":"43bd616c2dbad772613b397d816c6f3ebc1ceb3dea2da9e16799c92367bf939a"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716811 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" event={"ID":"df2b8111-41c6-4333-b473-4c08fb836f70","Type":"ContainerStarted","Data":"94827312ec38c6658c55e209ea3bcc5483bed338d5de6a56306adc1c033c902b"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716821 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" event={"ID":"df2b8111-41c6-4333-b473-4c08fb836f70","Type":"ContainerStarted","Data":"513f9261949841afe139d9cdba0a1314c71b8cc3ca522e4a37e97a5c0f7cd056"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716833 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" event={"ID":"55a2662a-d672-4a46-9b81-bfcaf334eedb","Type":"ContainerStarted","Data":"bcd82e1ea303c732e8cd1c96072c832298944aee61e13de8101c6575d136f541"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716844 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" event={"ID":"55a2662a-d672-4a46-9b81-bfcaf334eedb","Type":"ContainerStarted","Data":"45681e7db0a00432167c0ceb01dfa150d4182b397673d5a0da048e4b9054ffea"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716854 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerStarted","Data":"6d16a4b4c8918a6f68fb2a5efd3e381184ca33865224072dbe4960214adf0d1a"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716866 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerDied","Data":"08908d93b30c563001e2ff25650203b4026284e8bf58ed4cc26c75825e885fed"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716878 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerStarted","Data":"8c06bdff0d8155655c32b3773e94d9d6596a89111686f4ac225c1d656438d2f6"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716887 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" event={"ID":"7b4e3ba0-5194-4e20-8f12-dea4b67504fe","Type":"ContainerStarted","Data":"7ed7554e0b6eb88f1840ed2eee6ab3bddc21f230340186b0439d63f9a885eb31"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716897 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" event={"ID":"8ebd1a97-ff7b-4a10-a1b5-956e427478a8","Type":"ContainerStarted","Data":"3595b27c901a436cc869a6f11c6c419c015d879fa7a8bd4cad8a61ebd21bfc83"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716910 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" event={"ID":"8ebd1a97-ff7b-4a10-a1b5-956e427478a8","Type":"ContainerDied","Data":"1ad25f42402e4374c8b94191386be9d7bc2003ced71ea11e24c2117025405399"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716921 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" event={"ID":"8ebd1a97-ff7b-4a10-a1b5-956e427478a8","Type":"ContainerStarted","Data":"b42c212cb365bd4dad9063fd3d49d84292444529270d009b20ccad68831b287a"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716930 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" event={"ID":"8ebd1a97-ff7b-4a10-a1b5-956e427478a8","Type":"ContainerStarted","Data":"276e47463c76f9595550735ca5a2eb97f44bfa685298a20ea61ee705f8a41bd4"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716940 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerStarted","Data":"e6ffae40e1742eb9b97a832429ff51e680fe1fd780f353261d9c3fa4f6b341dc"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716951 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerDied","Data":"31574f7edd04f096f094395019ad492b65bb9b76a514603c20ad3eb658100f5f"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716962 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerDied","Data":"ec5c26bb0883484781a82be8d7bf1a6eb78e1cb6c0192ee0fe34ebba8f9531c4"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716972 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" event={"ID":"c92835f0-7f32-4584-8304-843d7979392a","Type":"ContainerStarted","Data":"a6e4933443321f6f827221301b84c881881ea51343c84ac3ad457e15891f86d0"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716982 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-54b95" event={"ID":"e3a675b9-feaa-4456-b7b4-0cd3afc42a42","Type":"ContainerStarted","Data":"ca461cd5846178a42e36d7c5be475acd0be7b72129a007ae8d0fef2ce6b0c63e"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716992 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-54b95" event={"ID":"e3a675b9-feaa-4456-b7b4-0cd3afc42a42","Type":"ContainerStarted","Data":"3d94304059d808624e692a18999e46c1ed32aa07c16bb3ea5a63de6a687dd377"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717002 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" event={"ID":"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7","Type":"ContainerStarted","Data":"e94e209be7ac1e9c90f8d05540fb0675399616e7598a1415ddddef916110f47e"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717014 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" event={"ID":"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7","Type":"ContainerDied","Data":"5d2191d76e599e2fb849008551eb68ecc941d4be08a06831c47e2e57561783d8"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717025 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" event={"ID":"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7","Type":"ContainerStarted","Data":"1cc1996551692c223eb12edcadd4f14bef06fed859ebb6d00f4391944783b38d"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717035 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerStarted","Data":"5ade2e4f1fdc37ba74fb08a73d1b48600d369e60d30927c9f48ef0e5d4fba55a"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717046 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerDied","Data":"af016520fee3befe328cfcacf1e61661d632e7bab3a83e84890042b8512881e1"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717069 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerDied","Data":"3765f830d9a5fe9077b8e56d63e0f2189d75d32a461453b1f0db5a0b05c13f47"}
Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717079 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerDied","Data":"4d87ace597126f6a6c5b7ecfae7ff8d57f99cad256a801b2bb6027c85887bf7c"}
Feb 24 02:20:58.728127 master-0
kubenswrapper[31411]: I0224 02:20:58.717090 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" event={"ID":"303d5058-84df-40d1-a941-896b093ae470","Type":"ContainerStarted","Data":"f64a1a9e81543288c082ba54493b536ca2db47fef63c0b6ea8e2ecd8d4fc6a3b"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717100 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqt7p" event={"ID":"b085f760-0e24-41a8-af09-538396aad935","Type":"ContainerStarted","Data":"f5b9142d347cbc969aebf3a8b0f790729154ed617740186897333ed97fd30b72"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717112 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqt7p" event={"ID":"b085f760-0e24-41a8-af09-538396aad935","Type":"ContainerDied","Data":"532c8330be4e1bb00f3b8c98db49eb86ee33fea1c47fd0eb58ed9999c987cc56"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717124 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqt7p" event={"ID":"b085f760-0e24-41a8-af09-538396aad935","Type":"ContainerDied","Data":"aab5f16cb62468cbd33a1b962837194ed256ccf00334d577f86ad2e704134976"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717134 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqt7p" event={"ID":"b085f760-0e24-41a8-af09-538396aad935","Type":"ContainerStarted","Data":"f0367a6433cb34322b034ae4858460e50d6150d575d08858a19734f838b6527f"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717144 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk" 
event={"ID":"011c6603-d533-4449-b409-f6f698a3bd50","Type":"ContainerStarted","Data":"188661fad9f6ca0ff77605f5232fde5303986f995916e8cce064c3bbfe8c7e01"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717155 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk" event={"ID":"011c6603-d533-4449-b409-f6f698a3bd50","Type":"ContainerStarted","Data":"da0959fc5c7a27175270ce726463fd3e9e8da5aff2a8a6bf45a477613fc17349"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717165 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" event={"ID":"22a83952-32ec-48f7-85cd-209b62362ae2","Type":"ContainerStarted","Data":"48f55c332467fced473c4c4e91af307834aa39f6e1f504defa9413e83cb73702"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717178 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2" event={"ID":"22a83952-32ec-48f7-85cd-209b62362ae2","Type":"ContainerStarted","Data":"0a122f4d5531f3489b5545a54ec94812a3a4adf1ceb59316f98f88f87840e7dc"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717189 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"070ebb2d-57a2-4c76-8c93-e09d398f3b73","Type":"ContainerStarted","Data":"51a3db0d894d96bab79a718a222631106e9405e2deb8d971fa5341ac8b946184"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717201 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"070ebb2d-57a2-4c76-8c93-e09d398f3b73","Type":"ContainerStarted","Data":"b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 
02:20:58.717211 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" event={"ID":"608a8a56-daee-4fa1-8300-42155217c68b","Type":"ContainerStarted","Data":"860e1ea0a41f00b850cab433b6728eb3878d47cbf363a792c5a1a2425dd74bf4"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717223 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" event={"ID":"608a8a56-daee-4fa1-8300-42155217c68b","Type":"ContainerStarted","Data":"20f9b5f75c2cde17a1d4633c252f670c4f9f5295d80a5639b06ba7c15a2a2e27"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717234 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" event={"ID":"608a8a56-daee-4fa1-8300-42155217c68b","Type":"ContainerStarted","Data":"aec27e3292c40382740e058d73f54f54825380bfa9c5ef79af6e0003ccd5e974"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717243 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" event={"ID":"608a8a56-daee-4fa1-8300-42155217c68b","Type":"ContainerStarted","Data":"9538eb885cdee2fa9a588e118eb2c741ad080c47591f7c5e45f680a3f6d76460"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717253 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brpmb" event={"ID":"0cb042de-c873-408c-a4c4-ef9f7e546a08","Type":"ContainerStarted","Data":"810cd243cc24c7b21ab849373cf57f7831cbca0a7bcf82441855e145620041e9"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717271 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brpmb" 
event={"ID":"0cb042de-c873-408c-a4c4-ef9f7e546a08","Type":"ContainerDied","Data":"d8bc7a70a71673332c516e84a549e1618f6a4d5aacc90397bac38d952ac62d70"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717283 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brpmb" event={"ID":"0cb042de-c873-408c-a4c4-ef9f7e546a08","Type":"ContainerDied","Data":"700bebabab614b094ae34ffb33548f3295723695a9fad8972c8014d17036eac5"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717293 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brpmb" event={"ID":"0cb042de-c873-408c-a4c4-ef9f7e546a08","Type":"ContainerStarted","Data":"9867de2597cef8ea21b27065bc6c5ebb42eda67f6c0e55aa21a2e92ed8089c54"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717303 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4lwwp" event={"ID":"390a7aa5-c7f7-4baf-a2d2-e6da9a465042","Type":"ContainerStarted","Data":"1be30aec3fa0bec94f6864e2fd84027a688b746b1f841fb7b577e57ec8f40903"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717314 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4lwwp" event={"ID":"390a7aa5-c7f7-4baf-a2d2-e6da9a465042","Type":"ContainerStarted","Data":"21aa7b4dfda40f1610fd6b64e23f1c617ce7b50ea96960fc42e2a8aaa9a792b2"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717325 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" event={"ID":"a02536a3-7d3e-4e74-9625-aefed518ec35","Type":"ContainerStarted","Data":"99a135eec1fc60a023004a63b57bb9c9bebf117dd68bed38de221f8b6714663d"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717336 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" event={"ID":"a02536a3-7d3e-4e74-9625-aefed518ec35","Type":"ContainerDied","Data":"fe82991d620c953a66413bff375a2b214f4a5b8652aa8341d49741ccabb41961"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717349 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" event={"ID":"a02536a3-7d3e-4e74-9625-aefed518ec35","Type":"ContainerStarted","Data":"a75855ac22ad61c526e140082a63e50802db589f96d5c1f8fe72f371e5c93069"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717360 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"5380c680490e259bb66eb660b6baa7d7340e0ee146b1b9cd483ce9f97fef3094"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717370 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"53aef3176bd11ce32053e4c2256ae3bd19adf8061abe89a3f26ff52596748dc6"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717380 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"4f5b06a0a1103084565e7f3fed74736cb11f62b92bf6867022587965f1a2caaf"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717389 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"c880f756d05774fc1f954066039c7ec198c9da869a02c1a619e01fcc3885fb5a"} Feb 24 02:20:58.728127 master-0 
kubenswrapper[31411]: I0224 02:20:58.717399 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"9bb3aaaf98b3e613aa38c174dae3a871e1597827859f13849e7bd01ad03bb7bb"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717408 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"1c39255c92a2233ed6cf746e8b12d337977d9bba6a9424c402e1eeeb4d639e30"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717418 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"059228c2a27c3aed25af8917cccf482faf03f812c73e457a250e417c4a568a0c"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717428 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"4cabdcb3ddc56ae97f7b4649fc91fc0a40b0adb8f619c78d4eb6d40afa505204"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717438 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerDied","Data":"c3063a301534062c954aa79867d0cc96573d7146ccda3bfb83406935c96bf2b9"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717450 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" event={"ID":"fb39fcc8-beb4-410e-b2a4-0b3e150719cc","Type":"ContainerStarted","Data":"5961c2c1ae4747bec6388a9fbe96dacb27b6a52832bcc7c5d12c3091d629abab"} Feb 24 02:20:58.728127 
master-0 kubenswrapper[31411]: I0224 02:20:58.717461 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-drf28" event={"ID":"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1","Type":"ContainerStarted","Data":"2a11c76f0140f7173ebd4cdba2e1203079bbe90c60c221d6fa54a28e0ae0592e"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717472 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-drf28" event={"ID":"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1","Type":"ContainerStarted","Data":"560cc2fc6affd50d504fd0043ec0076b50148137946f51caca10417e2832ae2a"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717482 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" event={"ID":"25190a18-bdac-479b-b526-840d28636be3","Type":"ContainerStarted","Data":"63c62ec4ef85b454c2032773739c7fd21c21de0158d8f07f7e5dd6a835789cb3"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717494 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" event={"ID":"25190a18-bdac-479b-b526-840d28636be3","Type":"ContainerStarted","Data":"b92739f4eec45b9fa61d0a87a22d8bd988c1f4a14cd1e3cd849380cb57883acc"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717508 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" event={"ID":"25190a18-bdac-479b-b526-840d28636be3","Type":"ContainerDied","Data":"2bd08832a83f0b581af5dd0d4502909325c97e4a1b072cf713d68506345db86b"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717519 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" 
event={"ID":"25190a18-bdac-479b-b526-840d28636be3","Type":"ContainerStarted","Data":"229e86eaf4e88e77ef5e6c4bc8577da2618b97072b856ff2c58bb725165574ff"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717528 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" event={"ID":"06fb1d82-f9e9-473b-80c5-767ec3948bd4","Type":"ContainerDied","Data":"23e5ece2a1174ce846ce41906ef5a0fcc35a5f58a900b96b34aee280e09c4850"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717540 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn" event={"ID":"06fb1d82-f9e9-473b-80c5-767ec3948bd4","Type":"ContainerDied","Data":"d422768badc0a68fd4cf2f302f097d6619f8211838023083d72164e3cae439f7"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717548 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d422768badc0a68fd4cf2f302f097d6619f8211838023083d72164e3cae439f7" Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717557 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d","Type":"ContainerDied","Data":"7141c97ac4f89de3381db3a918874906220134e85f0b183e0b4eaacb0434c063"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717582 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"eb9f7dc4-e69f-4fc1-bb1a-1878971d279d","Type":"ContainerDied","Data":"37f8d2ab136a840c907679de284c242a764649e281bfa4d2ebf7dc5249d87848"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717593 31411 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="37f8d2ab136a840c907679de284c242a764649e281bfa4d2ebf7dc5249d87848" Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717601 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s" event={"ID":"0127e0d5-9961-4ff6-851d-884e71e1dcf2","Type":"ContainerStarted","Data":"c48f95c28d464405a814f3600c47bc4d976bf90d59f1a44943118946c66b1b12"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717611 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s" event={"ID":"0127e0d5-9961-4ff6-851d-884e71e1dcf2","Type":"ContainerStarted","Data":"73a9bec7b7f8ef1aa34fa536a5b424811fa7916bba904a12f88c7fc64ea1d064"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717620 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s" event={"ID":"0127e0d5-9961-4ff6-851d-884e71e1dcf2","Type":"ContainerStarted","Data":"c8a44d641739b0edde589e3cc2ab82e120d1f854cda8b41d7ab46952d705c4b9"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717631 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97c357985567dbd0d0a6d267a8cc4448d666a74cc353c0643a50d8ab3f6c2302" Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717639 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" event={"ID":"02f1d753-983a-4c4a-b1a0-560de173859a","Type":"ContainerStarted","Data":"3b203d15747d627cfdb5de00e47a4742a40d9cb938d42607f4724f640a852526"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717650 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" 
event={"ID":"02f1d753-983a-4c4a-b1a0-560de173859a","Type":"ContainerStarted","Data":"f5861a89c1b826c96f8d7eb1735da2b4cdf59be101852d074de82cd32893d879"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717659 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd","Type":"ContainerStarted","Data":"a1bd23ca02400c09ae750684bb9e9e78e05cea2070ce8f8f95459966c9e876eb"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717670 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd","Type":"ContainerStarted","Data":"87721a77ed25537b4640a4bb3b51b25ccc9baed9db06541dd0ac651dd7e4b7bb"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717679 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" event={"ID":"2cb764f6-40f8-4e87-8be0-b9d7b0364201","Type":"ContainerStarted","Data":"0c2041bc3003c23bdf033e8d4336b3793e7dde4d2a89e2fa38af3e920180f589"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717690 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" event={"ID":"2cb764f6-40f8-4e87-8be0-b9d7b0364201","Type":"ContainerStarted","Data":"a4e2b660500e8f18a668256b1dac8c7a8ab77c9c1715967242e37dc5cc945cda"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717699 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" event={"ID":"2cb764f6-40f8-4e87-8be0-b9d7b0364201","Type":"ContainerStarted","Data":"583b21f55c4eaab72f3731e41c571ee4872bace9012dd0a496219d9d98220f85"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717708 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-4znnj" event={"ID":"e56a17d6-d740-4349-833e-b5279f7db2d4","Type":"ContainerStarted","Data":"15aeb6521953635bf29090f8919f8b80363ab8e1ccf1d84c06bc5a39df964852"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717720 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4znnj" event={"ID":"e56a17d6-d740-4349-833e-b5279f7db2d4","Type":"ContainerDied","Data":"5c2bae7a5f82ac2dacb3d782b5c120ab2d48eaa30503f169922755bafd417358"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717731 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4znnj" event={"ID":"e56a17d6-d740-4349-833e-b5279f7db2d4","Type":"ContainerDied","Data":"1e7c0eb2fdff11adf850fda4f441e025c2724fb8123e10487c2648065fa6f259"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717742 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4znnj" event={"ID":"e56a17d6-d740-4349-833e-b5279f7db2d4","Type":"ContainerStarted","Data":"29c6111030d71a276fc5ae8422a3897c52faae1bbf5d2f44516c595b0829852b"} Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.717427 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3332acec-1553-4594-a903-a322399f6d9d-metrics-tls\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:20:58.728127 master-0 kubenswrapper[31411]: I0224 02:20:58.716272 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " 
pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.717948 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.719109 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-systemd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.720249 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85222bf-f51a-4232-8db1-1e6ee593617b-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.720938 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nwzm\" (UniqueName: \"kubernetes.io/projected/c92835f0-7f32-4584-8304-843d7979392a-kube-api-access-6nwzm\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.721019 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-b8rjx\" (UniqueName: \"kubernetes.io/projected/91d16f7b-390a-4d9d-99d6-cc8e210801d1-kube-api-access-b8rjx\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.721076 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02536a3-7d3e-4e74-9625-aefed518ec35-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.721479 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.721536 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-serving-cert\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.721591 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cni-binary-copy\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " 
pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.721619 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgjlz\" (UniqueName: \"kubernetes.io/projected/6320dbb5-b84d-4a57-8c65-fbed8421f84a-kube-api-access-pgjlz\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.721644 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.723100 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-serving-cert\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.723338 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.723500 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cni-binary-copy\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.723928 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c6153510-452b-4726-8b63-8cc894daa168-signing-cabundle\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.724015 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.724076 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-system-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.724114 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-os-release\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.724454 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-env-overrides\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.726626 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.727581 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8jx\" (UniqueName: \"kubernetes.io/projected/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-kube-api-access-rc8jx\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.727623 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86pcb\" (UniqueName: \"kubernetes.io/projected/7b098bd4-5751-4b01-8409-0688fd29233e-kube-api-access-86pcb\") pod \"csi-snapshot-controller-operator-6fb4df594f-c95qc\" (UID: \"7b098bd4-5751-4b01-8409-0688fd29233e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.727655 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.728949 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-host-slash\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.729920 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv6t5\" (UniqueName: \"kubernetes.io/projected/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-kube-api-access-qv6t5\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.729958 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.729994 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssz8p\" (UniqueName: \"kubernetes.io/projected/6a9ccd8e-d964-4c03-8ffc-51b464030c25-kube-api-access-ssz8p\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730022 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:20:58.733667 master-0 
kubenswrapper[31411]: I0224 02:20:58.730048 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-slash\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730075 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-system-cni-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730104 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730133 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730155 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: 
\"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730181 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2b65\" (UniqueName: \"kubernetes.io/projected/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-kube-api-access-n2b65\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730208 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730235 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730257 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:58.733667 master-0 
kubenswrapper[31411]: I0224 02:20:58.730285 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730313 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730337 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-k8s-cni-cncf-io\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730363 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-kubelet\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730384 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") 
pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730413 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-cnibin\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730440 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8742\" (UniqueName: \"kubernetes.io/projected/f807f33c-8132-48a8-ab12-4b54c1cd2b10-kube-api-access-g8742\") pod \"migrator-5c85bff57-t5rgn\" (UID: \"f807f33c-8132-48a8-ab12-4b54c1cd2b10\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730472 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6lsp\" (UniqueName: \"kubernetes.io/projected/db8d6627-394c-4087-bfa4-bf7580f6bb4b-kube-api-access-x6lsp\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730499 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-socket-dir-parent\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730524 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3332acec-1553-4594-a903-a322399f6d9d-host-etc-kube\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730547 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.730843 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.728742 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.731121 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2cb764f6-40f8-4e87-8be0-b9d7b0364201-metrics-tls\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.728495 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-env-overrides\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.728895 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.731290 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.729194 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.731421 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a9ccd8e-d964-4c03-8ffc-51b464030c25-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.731441 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.729380 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.731475 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-ca\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.731611 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sp95\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-kube-api-access-9sp95\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.731704 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732203 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-kubelet\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " 
pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732268 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732295 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732369 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02536a3-7d3e-4e74-9625-aefed518ec35-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732395 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-daemon-config\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732422 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-script-lib\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732651 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732828 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732848 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-daemon-config\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732918 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/523033b8-4101-4a55-8320-55bef04ddaaf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:20:58.733667 master-0 
kubenswrapper[31411]: I0224 02:20:58.732951 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxz8j\" (UniqueName: \"kubernetes.io/projected/c6153510-452b-4726-8b63-8cc894daa168-kube-api-access-lxz8j\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.732978 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c6153510-452b-4726-8b63-8cc894daa168-signing-key\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733008 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733019 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02536a3-7d3e-4e74-9625-aefed518ec35-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733074 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-profile-collector-cert\") pod 
\"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733068 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733029 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hfh\" (UniqueName: \"kubernetes.io/projected/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-kube-api-access-82hfh\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733145 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-netns\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733184 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bl6\" (UniqueName: \"kubernetes.io/projected/303d5058-84df-40d1-a941-896b093ae470-kube-api-access-79bl6\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733222 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-netd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733294 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-config\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733322 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733328 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/523033b8-4101-4a55-8320-55bef04ddaaf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733350 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: 
\"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733376 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733546 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-config\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733662 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733696 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733721 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-env-overrides\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733730 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733755 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-ovn\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733782 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92835f0-7f32-4584-8304-843d7979392a-serving-cert\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:20:58.733667 master-0 kubenswrapper[31411]: I0224 02:20:58.733811 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:20:58.736213 master-0 
kubenswrapper[31411]: I0224 02:20:58.733838 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02536a3-7d3e-4e74-9625-aefed518ec35-config\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.733869 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.733901 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.733927 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnd2\" (UniqueName: \"kubernetes.io/projected/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-kube-api-access-bbnd2\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.733955 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njjq8\" (UniqueName: \"kubernetes.io/projected/b36d8451-0fda-4d9d-a850-d05c8f847016-kube-api-access-njjq8\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: 
\"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.733984 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqqwj\" (UniqueName: \"kubernetes.io/projected/ca1250a6-30f0-4cc0-b9b0-eabde42aefcf-kube-api-access-fqqwj\") pod \"network-check-source-58fb6744f5-l4wh6\" (UID: \"ca1250a6-30f0-4cc0-b9b0-eabde42aefcf\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734005 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734010 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlg2j\" (UniqueName: \"kubernetes.io/projected/523033b8-4101-4a55-8320-55bef04ddaaf-kube-api-access-dlg2j\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734139 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7lsb\" (UniqueName: \"kubernetes.io/projected/f6e7b773-7ecd-4a5c-8bef-d672f371e7e5-kube-api-access-q7lsb\") pod \"csi-snapshot-controller-6847bb4785-8l58x\" (UID: \"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734224 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-service-ca-bundle\") pod 
\"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734255 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c92835f0-7f32-4584-8304-843d7979392a-serving-cert\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734405 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6f7j\" (UniqueName: \"kubernetes.io/projected/cabdddba-5507-4e47-98ef-a00c6d0f305d-kube-api-access-h6f7j\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734475 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734497 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-metrics-tls\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734557 31411 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734641 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a02536a3-7d3e-4e74-9625-aefed518ec35-config\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734650 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/303d5058-84df-40d1-a941-896b093ae470-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734728 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8hv\" (UniqueName: \"kubernetes.io/projected/2cb764f6-40f8-4e87-8be0-b9d7b0364201-kube-api-access-sp8hv\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734801 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpg26\" (UniqueName: \"kubernetes.io/projected/fcbda577-b943-4b5c-b041-948aece8e40f-kube-api-access-vpg26\") pod 
\"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.734900 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91d16f7b-390a-4d9d-99d6-cc8e210801d1-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735040 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtf52\" (UniqueName: \"kubernetes.io/projected/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-kube-api-access-jtf52\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735071 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/303d5058-84df-40d1-a941-896b093ae470-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735121 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nmd6\" (UniqueName: \"kubernetes.io/projected/70e2ba24-4871-4d1d-9935-156fdbeb2810-kube-api-access-4nmd6\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 
02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735160 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabdddba-5507-4e47-98ef-a00c6d0f305d-serving-cert\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735183 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735233 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a9ccd8e-d964-4c03-8ffc-51b464030c25-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735300 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-config\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735335 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod 
\"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735361 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735387 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735415 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735421 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cabdddba-5507-4e47-98ef-a00c6d0f305d-serving-cert\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 
02:20:58.735443 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735505 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-config\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735585 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-ovnkube-script-lib\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735621 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735650 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 
02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735680 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8d6627-394c-4087-bfa4-bf7580f6bb4b-proxy-tls\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735699 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-bin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735767 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6320dbb5-b84d-4a57-8c65-fbed8421f84a-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735908 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb52w\" (UniqueName: \"kubernetes.io/projected/02f1d753-983a-4c4a-b1a0-560de173859a-kube-api-access-mb52w\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735941 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735971 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-etc-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.735996 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.736020 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/303d5058-84df-40d1-a941-896b093ae470-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.736086 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-multus\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.736213 
master-0 kubenswrapper[31411]: I0224 02:20:58.736128 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-bound-sa-token\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.736150 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.736170 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-node-log\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.736175 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cabdddba-5507-4e47-98ef-a00c6d0f305d-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.736186 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12b89e05-a503-47aa-90b2-4d741e015b19-srv-cert\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: 
\"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.736192 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.736254 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/303d5058-84df-40d1-a941-896b093ae470-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k" Feb 24 02:20:58.736213 master-0 kubenswrapper[31411]: I0224 02:20:58.736267 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbda577-b943-4b5c-b041-948aece8e40f-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736308 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736402 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a9ccd8e-d964-4c03-8ffc-51b464030c25-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736404 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736532 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c92835f0-7f32-4584-8304-843d7979392a-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736784 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736821 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/c92835f0-7f32-4584-8304-843d7979392a-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736840 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736860 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c84dc269-43ae-4083-9998-a0b3c90bb681-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736872 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736889 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-client\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736923 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736966 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-hostroot\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.736483 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/02f1d753-983a-4c4a-b1a0-560de173859a-srv-cert\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737000 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737162 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twgrj\" (UniqueName: \"kubernetes.io/projected/12b89e05-a503-47aa-90b2-4d741e015b19-kube-api-access-twgrj\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737193 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6qs2\" (UniqueName: \"kubernetes.io/projected/3332acec-1553-4594-a903-a322399f6d9d-kube-api-access-x6qs2\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737220 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737248 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-etcd-client\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737264 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cnibin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737302 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-systemd-units\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737332 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-images\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737355 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737387 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-os-release\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " 
pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737496 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adc1097b-c1ab-4f09-965d-1c819671475b-webhook-cert\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737516 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-ovnkube-identity-cm\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737549 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-conf-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737620 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-images\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737628 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-multus-certs\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737669 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8d6627-394c-4087-bfa4-bf7580f6bb4b-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737738 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-etc-kubernetes\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737786 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-var-lib-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737834 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-ovnkube-identity-cm\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737861 31411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqqkv\" (UniqueName: \"kubernetes.io/projected/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-kube-api-access-dqqkv\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737917 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737969 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84dc269-43ae-4083-9998-a0b3c90bb681-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.737998 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-netns\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.738025 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: 
\"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.738047 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-iptables-alerter-script\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:20:58.738163 master-0 kubenswrapper[31411]: I0224 02:20:58.738068 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-trusted-ca\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738271 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-bin\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738310 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738333 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738356 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738380 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwjpw\" (UniqueName: \"kubernetes.io/projected/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-kube-api-access-pwjpw\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738405 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738425 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qph4g\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-kube-api-access-qph4g\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: 
\"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738450 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtld\" (UniqueName: \"kubernetes.io/projected/adc1097b-c1ab-4f09-965d-1c819671475b-kube-api-access-nqtld\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738475 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-serving-cert\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738497 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-log-socket\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738521 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbda577-b943-4b5c-b041-948aece8e40f-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738540 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738565 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36d8451-0fda-4d9d-a850-d05c8f847016-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738815 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-serving-cert\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738916 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c84dc269-43ae-4083-9998-a0b3c90bb681-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.738378 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " 
pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.739155 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b36d8451-0fda-4d9d-a850-d05c8f847016-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz" Feb 24 02:20:58.739220 master-0 kubenswrapper[31411]: I0224 02:20:58.739239 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-trusted-ca\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb" Feb 24 02:20:58.739697 master-0 kubenswrapper[31411]: I0224 02:20:58.739367 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq" Feb 24 02:20:58.739697 master-0 kubenswrapper[31411]: I0224 02:20:58.739388 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/523033b8-4101-4a55-8320-55bef04ddaaf-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k" Feb 24 02:20:58.739697 master-0 kubenswrapper[31411]: I0224 02:20:58.739557 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" 
(UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.740944 master-0 kubenswrapper[31411]: I0224 02:20:58.740906 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb" Feb 24 02:20:58.754028 master-0 kubenswrapper[31411]: I0224 02:20:58.753993 31411 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 24 02:20:58.754632 master-0 kubenswrapper[31411]: I0224 02:20:58.754598 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 02:20:58.761506 master-0 kubenswrapper[31411]: I0224 02:20:58.761470 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/57811d07-ae8a-44b7-8efb-dafc5afad31e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.774997 master-0 kubenswrapper[31411]: I0224 02:20:58.774957 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 02:20:58.796038 master-0 kubenswrapper[31411]: I0224 02:20:58.795974 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 02:20:58.805140 master-0 kubenswrapper[31411]: I0224 02:20:58.805104 
31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/adc1097b-c1ab-4f09-965d-1c819671475b-env-overrides\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q" Feb 24 02:20:58.814332 master-0 kubenswrapper[31411]: I0224 02:20:58.814290 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 02:20:58.834848 master-0 kubenswrapper[31411]: I0224 02:20:58.834802 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 02:20:58.841516 master-0 kubenswrapper[31411]: I0224 02:20:58.841317 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a5305004-5311-4bc4-ad7c-6670f97c89cb-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:20:58.841516 master-0 kubenswrapper[31411]: I0224 02:20:58.841443 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-os-release\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.841516 master-0 kubenswrapper[31411]: I0224 02:20:58.841349 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/a5305004-5311-4bc4-ad7c-6670f97c89cb-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" 
Feb 24 02:20:58.841978 master-0 kubenswrapper[31411]: I0224 02:20:58.841688 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt2q4\" (UniqueName: \"kubernetes.io/projected/91168f3d-70eb-4351-bb83-5411a96ad29d-kube-api-access-rt2q4\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk" Feb 24 02:20:58.841978 master-0 kubenswrapper[31411]: I0224 02:20:58.841879 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-os-release\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.842166 master-0 kubenswrapper[31411]: I0224 02:20:58.841807 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb7jb\" (UniqueName: \"kubernetes.io/projected/8e70a9f5-1154-40e9-a487-21e36e7f420a-kube-api-access-mb7jb\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:20:58.842266 master-0 kubenswrapper[31411]: I0224 02:20:58.842213 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-run\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.842393 master-0 kubenswrapper[31411]: I0224 02:20:58.842348 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-host\") 
pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.842509 master-0 kubenswrapper[31411]: I0224 02:20:58.842477 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-kubelet\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.842697 master-0 kubenswrapper[31411]: I0224 02:20:58.842652 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-kubelet\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.842854 master-0 kubenswrapper[31411]: I0224 02:20:58.842759 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/24765ff1-5e7d-4100-ad81-8f73555fc0a2-metrics-client-ca\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.842923 master-0 kubenswrapper[31411]: I0224 02:20:58.842895 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:20:58.843080 master-0 kubenswrapper[31411]: I0224 02:20:58.843050 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-var-lock\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:58.843210 master-0 kubenswrapper[31411]: I0224 02:20:58.843182 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:20:58.843325 master-0 kubenswrapper[31411]: I0224 02:20:58.843283 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:20:58.843417 master-0 kubenswrapper[31411]: I0224 02:20:58.843389 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:58.843484 master-0 kubenswrapper[31411]: I0224 02:20:58.843441 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f5b3b93-a59d-495c-a311-8913fa6000fc-cache\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.843823 master-0 kubenswrapper[31411]: I0224 02:20:58.843521 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:58.843916 master-0 kubenswrapper[31411]: I0224 02:20:58.843662 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4f5b3b93-a59d-495c-a311-8913fa6000fc-cache\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.843916 master-0 kubenswrapper[31411]: I0224 02:20:58.843745 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df42c69b-1a0e-41f5-9006-17540369b9ad-proxy-tls\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:20:58.844070 master-0 kubenswrapper[31411]: I0224 02:20:58.843946 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-socket-dir-parent\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.844143 master-0 kubenswrapper[31411]: I0224 02:20:58.844078 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-socket-dir-parent\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.844277 master-0 kubenswrapper[31411]: I0224 02:20:58.844188 31411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-apiservice-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" Feb 24 02:20:58.844487 master-0 kubenswrapper[31411]: I0224 02:20:58.844439 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbzsl\" (UniqueName: \"kubernetes.io/projected/3e36c9eb-0368-46dc-af84-9c602a15555d-kube-api-access-lbzsl\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:20:58.844557 master-0 kubenswrapper[31411]: I0224 02:20:58.844520 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98725\" (UniqueName: \"kubernetes.io/projected/24765ff1-5e7d-4100-ad81-8f73555fc0a2-kube-api-access-98725\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.844650 master-0 kubenswrapper[31411]: I0224 02:20:58.844606 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:20:58.844716 master-0 kubenswrapper[31411]: I0224 02:20:58.844671 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3332acec-1553-4594-a903-a322399f6d9d-host-etc-kube\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " 
pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:20:58.844780 master-0 kubenswrapper[31411]: I0224 02:20:58.844735 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3332acec-1553-4594-a903-a322399f6d9d-host-etc-kube\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm" Feb 24 02:20:58.844866 master-0 kubenswrapper[31411]: I0224 02:20:58.844763 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg7sb\" (UniqueName: \"kubernetes.io/projected/e56a17d6-d740-4349-833e-b5279f7db2d4-kube-api-access-gg7sb\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:20:58.844866 master-0 kubenswrapper[31411]: I0224 02:20:58.844827 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-encryption-config\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:58.845172 master-0 kubenswrapper[31411]: I0224 02:20:58.845136 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.845293 master-0 kubenswrapper[31411]: I0224 02:20:58.845255 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/608a8a56-daee-4fa1-8300-42155217c68b-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:20:58.845441 master-0 kubenswrapper[31411]: I0224 02:20:58.845393 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:58.845546 master-0 kubenswrapper[31411]: I0224 02:20:58.845495 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:20:58.845704 master-0 kubenswrapper[31411]: I0224 02:20:58.845551 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:20:58.846040 master-0 kubenswrapper[31411]: I0224 02:20:58.845819 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 
02:20:58.846307 master-0 kubenswrapper[31411]: I0224 02:20:58.846124 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91168f3d-70eb-4351-bb83-5411a96ad29d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:20:58.846452 master-0 kubenswrapper[31411]: I0224 02:20:58.846382 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74a7801b-b7a4-4292-91b3-6285c239aeb7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"
Feb 24 02:20:58.846657 master-0 kubenswrapper[31411]: I0224 02:20:58.846525 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gckc2\" (UniqueName: \"kubernetes.io/projected/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-kube-api-access-gckc2\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:20:58.846753 master-0 kubenswrapper[31411]: I0224 02:20:58.846720 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-tuned\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:20:58.846920 master-0 kubenswrapper[31411]: I0224 02:20:58.846781 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 02:20:58.846989 master-0 kubenswrapper[31411]: I0224 02:20:58.846940 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-audit-policies\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:20:58.847070 master-0 kubenswrapper[31411]: I0224 02:20:58.846888 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-tuned\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:20:58.847070 master-0 kubenswrapper[31411]: I0224 02:20:58.846872 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 02:20:58.847070 master-0 kubenswrapper[31411]: I0224 02:20:58.846983 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl"
Feb 24 02:20:58.847248 master-0 kubenswrapper[31411]: I0224 02:20:58.847142 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"
Feb 24 02:20:58.847367 master-0 kubenswrapper[31411]: I0224 02:20:58.847336 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-lib-modules\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:20:58.847622 master-0 kubenswrapper[31411]: I0224 02:20:58.847451 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-catalog-content\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb"
Feb 24 02:20:58.847697 master-0 kubenswrapper[31411]: I0224 02:20:58.847633 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26"
Feb 24 02:20:58.847697 master-0 kubenswrapper[31411]: I0224 02:20:58.847686 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:20:58.847826 master-0 kubenswrapper[31411]: I0224 02:20:58.847715 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl"
Feb 24 02:20:58.847826 master-0 kubenswrapper[31411]: I0224 02:20:58.847756 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kube-api-access\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 24 02:20:58.847952 master-0 kubenswrapper[31411]: I0224 02:20:58.847836 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-wtmp\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m"
Feb 24 02:20:58.847952 master-0 kubenswrapper[31411]: I0224 02:20:58.847926 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-certs\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28"
Feb 24 02:20:58.848097 master-0 kubenswrapper[31411]: I0224 02:20:58.848056 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/732a3831-20e0-47dc-a29a-8bb4659541b7-service-ca\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:20:58.848181 master-0 kubenswrapper[31411]: I0224 02:20:58.848153 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:20:58.848260 master-0 kubenswrapper[31411]: I0224 02:20:58.848220 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-proxy-tls\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4"
Feb 24 02:20:58.848326 master-0 kubenswrapper[31411]: I0224 02:20:58.848300 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 02:20:58.848391 master-0 kubenswrapper[31411]: I0224 02:20:58.848350 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-catalog-content\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb"
Feb 24 02:20:58.848453 master-0 kubenswrapper[31411]: I0224 02:20:58.848391 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 02:20:58.848517 master-0 kubenswrapper[31411]: I0224 02:20:58.848366 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.848517 master-0 kubenswrapper[31411]: I0224 02:20:58.848509 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-bin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.848687 master-0 kubenswrapper[31411]: I0224 02:20:58.848543 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-image-import-ca\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:20:58.848687 master-0 kubenswrapper[31411]: I0224 02:20:58.848593 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-etc-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.848687 master-0 kubenswrapper[31411]: I0224 02:20:58.848621 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:20:58.848687 master-0 kubenswrapper[31411]: I0224 02:20:58.848628 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.848687 master-0 kubenswrapper[31411]: I0224 02:20:58.848648 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-bin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.848687 master-0 kubenswrapper[31411]: I0224 02:20:58.848649 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-996wg\" (UniqueName: \"kubernetes.io/projected/638b3f88-0386-4f30-8ca5-6255e8f936fc-kube-api-access-996wg\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:20:58.849095 master-0 kubenswrapper[31411]: I0224 02:20:58.848749 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-etc-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.849095 master-0 kubenswrapper[31411]: I0224 02:20:58.848758 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91168f3d-70eb-4351-bb83-5411a96ad29d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:20:58.849095 master-0 kubenswrapper[31411]: I0224 02:20:58.848830 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 02:20:58.849095 master-0 kubenswrapper[31411]: I0224 02:20:58.848888 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-multus\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.849095 master-0 kubenswrapper[31411]: I0224 02:20:58.848980 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-cni-multus\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.849095 master-0 kubenswrapper[31411]: I0224 02:20:58.848962 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 02:20:58.849544 master-0 kubenswrapper[31411]: I0224 02:20:58.849074 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-node-log\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.849544 master-0 kubenswrapper[31411]: I0224 02:20:58.849138 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-node-log\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.849544 master-0 kubenswrapper[31411]: I0224 02:20:58.849269 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.849544 master-0 kubenswrapper[31411]: I0224 02:20:58.849370 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.849855 master-0 kubenswrapper[31411]: I0224 02:20:58.849558 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:20:58.849855 master-0 kubenswrapper[31411]: I0224 02:20:58.849716 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:20:58.849855 master-0 kubenswrapper[31411]: I0224 02:20:58.849765 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4q7n\" (UniqueName: \"kubernetes.io/projected/df42c69b-1a0e-41f5-9006-17540369b9ad-kube-api-access-f4q7n\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql"
Feb 24 02:20:58.849855 master-0 kubenswrapper[31411]: I0224 02:20:58.849809 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.850099 master-0 kubenswrapper[31411]: I0224 02:20:58.849854 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:20:58.850099 master-0 kubenswrapper[31411]: I0224 02:20:58.849899 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b176946a-c056-441c-9145-b88ca4d75758-audit-dir\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:20:58.850099 master-0 kubenswrapper[31411]: I0224 02:20:58.849938 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 24 02:20:58.850099 master-0 kubenswrapper[31411]: I0224 02:20:58.850044 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cnibin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.850099 master-0 kubenswrapper[31411]: I0224 02:20:58.850065 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.850099 master-0 kubenswrapper[31411]: I0224 02:20:58.850084 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-systemd-units\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.850451 master-0 kubenswrapper[31411]: I0224 02:20:58.850129 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:20:58.850451 master-0 kubenswrapper[31411]: I0224 02:20:58.850201 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-cnibin\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.850451 master-0 kubenswrapper[31411]: I0224 02:20:58.850242 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-encryption-config\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:20:58.850451 master-0 kubenswrapper[31411]: I0224 02:20:58.850294 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:20:58.850451 master-0 kubenswrapper[31411]: I0224 02:20:58.850335 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-systemd-units\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.850451 master-0 kubenswrapper[31411]: I0224 02:20:58.850345 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xz68\" (UniqueName: \"kubernetes.io/projected/e68b3061-c9d2-469d-babf-7ccac0ad9b14-kube-api-access-6xz68\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl"
Feb 24 02:20:58.850451 master-0 kubenswrapper[31411]: I0224 02:20:58.850398 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:20:58.850451 master-0 kubenswrapper[31411]: I0224 02:20:58.850397 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysconfig\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850470 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850498 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-conf-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850527 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-multus-certs\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850552 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-etc-kubernetes\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850615 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-var-lib-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850643 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-metrics-certs\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850667 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-netns\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850708 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-run-netns\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850848 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-multus-conf-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.850971 master-0 kubenswrapper[31411]: I0224 02:20:58.850963 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-etc-kubernetes\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.851553 master-0 kubenswrapper[31411]: I0224 02:20:58.851031 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732a3831-20e0-47dc-a29a-8bb4659541b7-kube-api-access\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:20:58.851553 master-0 kubenswrapper[31411]: I0224 02:20:58.851075 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/df2b8111-41c6-4333-b473-4c08fb836f70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:20:58.851553 master-0 kubenswrapper[31411]: I0224 02:20:58.851151 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-var-lib-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:20:58.851553 master-0 kubenswrapper[31411]: I0224 02:20:58.851238 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-multus-certs\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:20:58.851826 master-0 kubenswrapper[31411]: I0224 02:20:58.851683 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57x9m\" (UniqueName: \"kubernetes.io/projected/a4cea44a-1c6e-465f-97df-2c951056cb85-kube-api-access-57x9m\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz"
Feb 24 02:20:58.851826 master-0 kubenswrapper[31411]: I0224 02:20:58.851758 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/22a83952-32ec-48f7-85cd-209b62362ae2-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-9gkp2\" (UID: \"22a83952-32ec-48f7-85cd-209b62362ae2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"
Feb 24 02:20:58.851953 master-0 kubenswrapper[31411]: I0224 02:20:58.851834 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-audit-dir\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:20:58.851953 master-0 kubenswrapper[31411]: I0224 02:20:58.851869 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0127e0d5-9961-4ff6-851d-884e71e1dcf2-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:20:58.851953 master-0 kubenswrapper[31411]: I0224 02:20:58.851905 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:20:58.852138 master-0 kubenswrapper[31411]: I0224 02:20:58.851971 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:20:58.852138 master-0 kubenswrapper[31411]: I0224 02:20:58.852010 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-var-lib-kubelet\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:20:58.852138 master-0 kubenswrapper[31411]: I0224 02:20:58.852043 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9ngc\" (UniqueName: \"kubernetes.io/projected/0cb042de-c873-408c-a4c4-ef9f7e546a08-kube-api-access-p9ngc\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb"
Feb 24 02:20:58.852138 master-0 kubenswrapper[31411]: I0224 02:20:58.852072 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:20:58.852138 master-0 kubenswrapper[31411]: I0224 02:20:58.852101 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-webhook-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852174 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852204 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-conf\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852246 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852278 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852309 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmhx\" (UniqueName: \"kubernetes.io/projected/74a7801b-b7a4-4292-91b3-6285c239aeb7-kube-api-access-pdmhx\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852309 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852338 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852382 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852434 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:20:58.852451 master-0 kubenswrapper[31411]: I0224 02:20:58.852454 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.852544 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxjtb\" (UniqueName: \"kubernetes.io/projected/e8d6a6c0-b944-4206-9178-9a9930b303b9-kube-api-access-zxjtb\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.852647 31411 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.852694 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-utilities\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.852777 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-etcd-client\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.852823 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-images\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.852863 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.852907 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.852950 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-sys\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.853025 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.853071 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4f5b3b93-a59d-495c-a311-8913fa6000fc-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.853125 master-0 kubenswrapper[31411]: I0224 02:20:58.853116 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-host-slash\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853152 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853351 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-host-slash\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853370 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-utilities\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853456 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853699 31411 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-system-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853754 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853768 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-system-cni-dir\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853788 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853852 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853869 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-slash\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853898 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-system-cni-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.853922 master-0 kubenswrapper[31411]: I0224 02:20:58.853928 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkpjn\" (UniqueName: \"kubernetes.io/projected/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-kube-api-access-dkpjn\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854003 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-system-cni-dir\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854013 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-slash\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854057 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-k8s-cni-cncf-io\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854112 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-root\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854171 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-k8s-cni-cncf-io\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854255 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9cad383a-cb69-41a8-aec8-23ee1c930430-tmpfs\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854332 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-node-bootstrap-token\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854401 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-cnibin\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854453 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9cad383a-cb69-41a8-aec8-23ee1c930430-tmpfs\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854486 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-cnibin\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854550 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-trusted-ca-bundle\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.854762 master-0 kubenswrapper[31411]: I0224 02:20:58.854762 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:20:58.855540 master-0 
kubenswrapper[31411]: I0224 02:20:58.854810 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4cea44a-1c6e-465f-97df-2c951056cb85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.854855 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-catalog-content\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.854887 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-etcd-serving-ca\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.854921 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.854951 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.854983 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855056 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-kubelet\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855106 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855127 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-var-lib-kubelet\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855137 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855167 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b085f760-0e24-41a8-af09-538396aad935-catalog-content\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855188 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c396c41-c617-4631-9700-a7052af5a276-audit-log\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855218 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855260 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855295 31411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-config\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855323 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-netns\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855337 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c396c41-c617-4631-9700-a7052af5a276-audit-log\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855349 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-netd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855284 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:20:58.855540 
master-0 kubenswrapper[31411]: I0224 02:20:58.855418 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855436 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-netd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855434 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-host-run-netns\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.855540 master-0 kubenswrapper[31411]: I0224 02:20:58.855460 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-etcd-client\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.855618 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.855673 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.855718 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4grf\" (UniqueName: \"kubernetes.io/projected/8c396c41-c617-4631-9700-a7052af5a276-kube-api-access-n4grf\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.855766 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.855836 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-ovn\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.855878 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh4wr\" (UniqueName: 
\"kubernetes.io/projected/011c6603-d533-4449-b409-f6f698a3bd50-kube-api-access-xh4wr\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.855919 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wvl\" (UniqueName: \"kubernetes.io/projected/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-kube-api-access-w6wvl\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.855961 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tszx\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-kube-api-access-2tszx\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.855999 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pc72\" (UniqueName: \"kubernetes.io/projected/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-kube-api-access-5pc72\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856048 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856098 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-systemd\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856122 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-ovn\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856135 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-modprobe-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856173 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-node-pullsecrets\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856224 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-audit\") pod \"apiserver-79dc9447fd-x64vl\" 
(UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856265 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dcvb\" (UniqueName: \"kubernetes.io/projected/8e0c87ae-6387-4c00-b03d-582566907fb6-kube-api-access-5dcvb\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856307 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856328 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/732a3831-20e0-47dc-a29a-8bb4659541b7-serving-cert\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856369 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzghr\" (UniqueName: \"kubernetes.io/projected/55a2662a-d672-4a46-9b81-bfcaf334eedb-kube-api-access-gzghr\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856408 31411 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df42c69b-1a0e-41f5-9006-17540369b9ad-mcd-auth-proxy-config\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856445 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856484 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856526 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-catalog-content\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856567 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-trusted-ca-bundle\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 
02:20:58.856629 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-service-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856658 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df42c69b-1a0e-41f5-9006-17540369b9ad-mcd-auth-proxy-config\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856663 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856713 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856753 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpg44\" (UniqueName: \"kubernetes.io/projected/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-kube-api-access-qpg44\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " 
pod="openshift-dns/dns-default-5rf6m" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856762 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-catalog-content\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856909 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856908 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856950 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.856972 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kznmr\" (UniqueName: \"kubernetes.io/projected/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-api-access-kznmr\") pod \"kube-state-metrics-59584d565f-f6f26\" 
(UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.857017 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.857040 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.857053 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.857045 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.856983 master-0 kubenswrapper[31411]: I0224 02:20:58.857120 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcq24\" (UniqueName: \"kubernetes.io/projected/b176946a-c056-441c-9145-b88ca4d75758-kube-api-access-kcq24\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " 
pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857159 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-tmp\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857198 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-utilities\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857237 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-serving-cert\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857283 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/638b3f88-0386-4f30-8ca5-6255e8f936fc-tmp\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857287 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857341 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857351 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cb042de-c873-408c-a4c4-ef9f7e546a08-utilities\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857380 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-textfile\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857393 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-openvswitch\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857424 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-etcd-serving-ca\") pod \"apiserver-79dc9447fd-x64vl\" 
(UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857469 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-textfile\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857520 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbc5w\" (UniqueName: \"kubernetes.io/projected/0127e0d5-9961-4ff6-851d-884e71e1dcf2-kube-api-access-nbc5w\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857621 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-ssl-certs\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857665 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8e70a9f5-1154-40e9-a487-21e36e7f420a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:20:58.860415 master-0 
kubenswrapper[31411]: I0224 02:20:58.857712 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-hostroot\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857775 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-utilities\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857821 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsb4q\" (UniqueName: \"kubernetes.io/projected/25190a18-bdac-479b-b526-840d28636be3-kube-api-access-bsb4q\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857863 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-default-certificate\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.857912 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: 
I0224 02:20:58.858024 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858067 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-os-release\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858106 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-hosts-file\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858156 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtsj\" (UniqueName: \"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-kube-api-access-ddtsj\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858220 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" 
(UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858276 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56a17d6-d740-4349-833e-b5279f7db2d4-utilities\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858285 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/57811d07-ae8a-44b7-8efb-dafc5afad31e-os-release\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858344 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858391 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858457 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-hostroot\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw" Feb 24 
02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858531 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-kubernetes\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858555 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858668 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858673 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70e2ba24-4871-4d1d-9935-156fdbeb2810-metrics-certs\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858751 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px2vd\" (UniqueName: \"kubernetes.io/projected/608a8a56-daee-4fa1-8300-42155217c68b-kube-api-access-px2vd\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " 
pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858836 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-catalog-content\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858891 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svc78\" (UniqueName: \"kubernetes.io/projected/9cad383a-cb69-41a8-aec8-23ee1c930430-kube-api-access-svc78\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858925 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-config\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.858957 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.859012 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-catalog-content\") pod \"community-operators-kkwwl\" 
(UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.859692 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv6zq\" (UniqueName: \"kubernetes.io/projected/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-kube-api-access-rv6zq\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.859763 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-config-volume\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.859810 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-bin\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.859867 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.859903 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-host-cni-bin\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.859918 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.859969 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-log-socket\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860040 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-log-socket\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860068 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q86gx\" (UniqueName: \"kubernetes.io/projected/b085f760-0e24-41a8-af09-538396aad935-kube-api-access-q86gx\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860113 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860150 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd796\" (UniqueName: \"kubernetes.io/projected/df2b8111-41c6-4333-b473-4c08fb836f70-kube-api-access-cd796\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860182 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-metrics-tls\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860213 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860243 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e0c87ae-6387-4c00-b03d-582566907fb6-serving-cert\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:20:58.860415 master-0 
kubenswrapper[31411]: I0224 02:20:58.860277 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/df42c69b-1a0e-41f5-9006-17540369b9ad-rootfs\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860342 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860381 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5kx\" (UniqueName: \"kubernetes.io/projected/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-kube-api-access-qc5kx\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:20:58.860415 master-0 kubenswrapper[31411]: I0224 02:20:58.860411 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4a2d8ef6-14ac-490d-a931-7082344d3f46-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860560 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: 
\"kubernetes.io/empty-dir/8e0c87ae-6387-4c00-b03d-582566907fb6-snapshots\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860631 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8msx\" (UniqueName: \"kubernetes.io/projected/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-kube-api-access-q8msx\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860658 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-sys\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860711 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-systemd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860708 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4a2d8ef6-14ac-490d-a931-7082344d3f46-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860745 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-var-lock\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860685 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8e0c87ae-6387-4c00-b03d-582566907fb6-snapshots\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860794 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-stats-auth\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860838 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-utilities\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860878 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-run-systemd\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 
02:20:58.860865 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/011c6603-d533-4449-b409-f6f698a3bd50-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860924 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.860955 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-service-ca-bundle\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:20:58.864143 master-0 kubenswrapper[31411]: I0224 02:20:58.861084 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-utilities\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:20:58.876510 master-0 kubenswrapper[31411]: I0224 02:20:58.876457 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 02:20:58.895245 master-0 kubenswrapper[31411]: I0224 02:20:58.895169 
31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 02:20:58.915723 master-0 kubenswrapper[31411]: I0224 02:20:58.915647 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 02:20:58.937989 master-0 kubenswrapper[31411]: I0224 02:20:58.937922 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 02:20:58.947182 master-0 kubenswrapper[31411]: I0224 02:20:58.947117 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbda577-b943-4b5c-b041-948aece8e40f-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:20:58.955377 master-0 kubenswrapper[31411]: I0224 02:20:58.955313 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 02:20:58.962791 master-0 kubenswrapper[31411]: I0224 02:20:58.962742 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-systemd\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.962791 master-0 kubenswrapper[31411]: I0224 02:20:58.962785 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-modprobe-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " 
pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.962980 master-0 kubenswrapper[31411]: I0224 02:20:58.962904 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-systemd\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.963103 master-0 kubenswrapper[31411]: I0224 02:20:58.963001 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-node-pullsecrets\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.963103 master-0 kubenswrapper[31411]: I0224 02:20:58.963034 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-node-pullsecrets\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.963265 master-0 kubenswrapper[31411]: I0224 02:20:58.963227 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:58.963345 master-0 kubenswrapper[31411]: I0224 02:20:58.963271 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-modprobe-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " 
pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.963425 master-0 kubenswrapper[31411]: I0224 02:20:58.963298 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:58.963813 master-0 kubenswrapper[31411]: I0224 02:20:58.963746 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-ssl-certs\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" Feb 24 02:20:58.963898 master-0 kubenswrapper[31411]: I0224 02:20:58.963827 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8e70a9f5-1154-40e9-a487-21e36e7f420a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:20:58.963973 master-0 kubenswrapper[31411]: I0224 02:20:58.963894 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8e70a9f5-1154-40e9-a487-21e36e7f420a-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" Feb 24 02:20:58.963973 master-0 kubenswrapper[31411]: I0224 02:20:58.963832 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-ssl-certs\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" Feb 24 02:20:58.963973 master-0 kubenswrapper[31411]: I0224 02:20:58.963941 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-hosts-file\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp" Feb 24 02:20:58.964193 master-0 kubenswrapper[31411]: I0224 02:20:58.963994 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-hosts-file\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp" Feb 24 02:20:58.964193 master-0 kubenswrapper[31411]: I0224 02:20:58.964033 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-kubernetes\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.964326 master-0 kubenswrapper[31411]: I0224 02:20:58.964210 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-kubernetes\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.964626 master-0 kubenswrapper[31411]: I0224 02:20:58.964550 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/df42c69b-1a0e-41f5-9006-17540369b9ad-rootfs\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:20:58.964733 master-0 kubenswrapper[31411]: I0224 02:20:58.964696 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-sys\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.964801 master-0 kubenswrapper[31411]: I0224 02:20:58.964752 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-var-lock\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:58.965026 master-0 kubenswrapper[31411]: I0224 02:20:58.964930 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-run\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.965026 master-0 kubenswrapper[31411]: I0224 02:20:58.965007 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-host\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.965222 master-0 kubenswrapper[31411]: I0224 02:20:58.965079 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-var-lock\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:58.965222 master-0 kubenswrapper[31411]: I0224 02:20:58.965193 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:20:58.965417 master-0 kubenswrapper[31411]: I0224 02:20:58.965276 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.965556 master-0 kubenswrapper[31411]: I0224 02:20:58.965479 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:58.965689 master-0 kubenswrapper[31411]: I0224 02:20:58.965598 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-lib-modules\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.965758 master-0 kubenswrapper[31411]: I0224 02:20:58.965735 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" Feb 24 02:20:58.965842 master-0 kubenswrapper[31411]: I0224 02:20:58.965825 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-wtmp\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.966232 master-0 kubenswrapper[31411]: I0224 02:20:58.966192 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:58.966322 master-0 kubenswrapper[31411]: I0224 02:20:58.966295 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/732a3831-20e0-47dc-a29a-8bb4659541b7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" Feb 24 02:20:58.966322 master-0 kubenswrapper[31411]: I0224 02:20:58.966305 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-sys\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " 
pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.966450 master-0 kubenswrapper[31411]: I0224 02:20:58.966381 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-wtmp\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.966450 master-0 kubenswrapper[31411]: I0224 02:20:58.966385 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-var-lock\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 02:20:58.966450 master-0 kubenswrapper[31411]: I0224 02:20:58.966198 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/df42c69b-1a0e-41f5-9006-17540369b9ad-rootfs\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql" Feb 24 02:20:58.966694 master-0 kubenswrapper[31411]: I0224 02:20:58.966460 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-d\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.966694 master-0 kubenswrapper[31411]: I0224 02:20:58.966521 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-var-lock\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " 
pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:58.966694 master-0 kubenswrapper[31411]: I0224 02:20:58.966535 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-run\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.966694 master-0 kubenswrapper[31411]: I0224 02:20:58.966629 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-host\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.967270 master-0 kubenswrapper[31411]: I0224 02:20:58.966662 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b176946a-c056-441c-9145-b88ca4d75758-audit-dir\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:58.967270 master-0 kubenswrapper[31411]: I0224 02:20:58.966764 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-lib-modules\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.967270 master-0 kubenswrapper[31411]: I0224 02:20:58.966795 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 
02:20:58.967270 master-0 kubenswrapper[31411]: I0224 02:20:58.966827 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:20:58.967270 master-0 kubenswrapper[31411]: I0224 02:20:58.966913 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysconfig\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.967270 master-0 kubenswrapper[31411]: I0224 02:20:58.966702 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b176946a-c056-441c-9145-b88ca4d75758-audit-dir\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:58.967270 master-0 kubenswrapper[31411]: I0224 02:20:58.967271 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-audit-dir\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.967270 master-0 kubenswrapper[31411]: I0224 02:20:58.967330 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-var-lib-kubelet\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.967880 master-0 
kubenswrapper[31411]: I0224 02:20:58.967357 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysconfig\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.967880 master-0 kubenswrapper[31411]: I0224 02:20:58.967492 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-var-lib-kubelet\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.967880 master-0 kubenswrapper[31411]: I0224 02:20:58.967599 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-conf\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.967880 master-0 kubenswrapper[31411]: I0224 02:20:58.967628 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25190a18-bdac-479b-b526-840d28636be3-audit-dir\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:58.967880 master-0 kubenswrapper[31411]: I0224 02:20:58.967667 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:20:58.967880 master-0 kubenswrapper[31411]: I0224 
02:20:58.967732 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:20:58.968449 master-0 kubenswrapper[31411]: I0224 02:20:58.967911 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/638b3f88-0386-4f30-8ca5-6255e8f936fc-etc-sysctl-conf\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v" Feb 24 02:20:58.968449 master-0 kubenswrapper[31411]: I0224 02:20:58.968358 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-sys\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.968675 master-0 kubenswrapper[31411]: I0224 02:20:58.968519 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.968760 master-0 kubenswrapper[31411]: I0224 02:20:58.968694 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-root\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.969029 master-0 kubenswrapper[31411]: I0224 02:20:58.968903 31411 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-sys\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.969310 master-0 kubenswrapper[31411]: I0224 02:20:58.969120 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:58.969310 master-0 kubenswrapper[31411]: I0224 02:20:58.969170 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.969310 master-0 kubenswrapper[31411]: I0224 02:20:58.969239 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.969310 master-0 kubenswrapper[31411]: I0224 02:20:58.969255 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:20:58.969310 master-0 kubenswrapper[31411]: I0224 
02:20:58.969308 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/24765ff1-5e7d-4100-ad81-8f73555fc0a2-root\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:20:58.970297 master-0 kubenswrapper[31411]: I0224 02:20:58.969345 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4f5b3b93-a59d-495c-a311-8913fa6000fc-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:58.970297 master-0 kubenswrapper[31411]: I0224 02:20:58.969764 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:58.970297 master-0 kubenswrapper[31411]: I0224 02:20:58.969774 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2d8ef6-14ac-490d-a931-7082344d3f46-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:58.977872 master-0 kubenswrapper[31411]: I0224 02:20:58.975504 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 02:20:58.979720 master-0 kubenswrapper[31411]: I0224 02:20:58.979655 31411 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbda577-b943-4b5c-b041-948aece8e40f-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2" Feb 24 02:20:58.997546 master-0 kubenswrapper[31411]: I0224 02:20:58.995460 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 02:20:58.999787 master-0 kubenswrapper[31411]: I0224 02:20:58.999726 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-iptables-alerter-script\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5" Feb 24 02:20:59.017708 master-0 kubenswrapper[31411]: I0224 02:20:59.017570 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 02:20:59.035807 master-0 kubenswrapper[31411]: I0224 02:20:59.035353 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 02:20:59.055497 master-0 kubenswrapper[31411]: I0224 02:20:59.055429 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 02:20:59.064758 master-0 kubenswrapper[31411]: I0224 02:20:59.064707 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c6153510-452b-4726-8b63-8cc894daa168-signing-key\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:20:59.076022 
master-0 kubenswrapper[31411]: I0224 02:20:59.075978 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 02:20:59.094837 master-0 kubenswrapper[31411]: I0224 02:20:59.094682 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 02:20:59.098877 master-0 kubenswrapper[31411]: I0224 02:20:59.098827 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c6153510-452b-4726-8b63-8cc894daa168-signing-cabundle\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2" Feb 24 02:20:59.105901 master-0 kubenswrapper[31411]: I0224 02:20:59.105847 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 24 02:20:59.115753 master-0 kubenswrapper[31411]: I0224 02:20:59.115703 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 02:20:59.136060 master-0 kubenswrapper[31411]: I0224 02:20:59.135616 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 02:20:59.139855 master-0 kubenswrapper[31411]: I0224 02:20:59.139799 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-config\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:59.154186 master-0 kubenswrapper[31411]: I0224 02:20:59.154132 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 02:20:59.175338 master-0 kubenswrapper[31411]: I0224 02:20:59.175172 31411 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 02:20:59.176070 master-0 kubenswrapper[31411]: I0224 02:20:59.176021 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-etcd-client\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:59.196621 master-0 kubenswrapper[31411]: I0224 02:20:59.196535 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 02:20:59.204660 master-0 kubenswrapper[31411]: I0224 02:20:59.204604 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-serving-cert\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:59.215379 master-0 kubenswrapper[31411]: I0224 02:20:59.215320 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 02:20:59.221350 master-0 kubenswrapper[31411]: I0224 02:20:59.221281 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25190a18-bdac-479b-b526-840d28636be3-encryption-config\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:59.236621 master-0 kubenswrapper[31411]: I0224 02:20:59.235661 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 02:20:59.240617 master-0 kubenswrapper[31411]: I0224 02:20:59.237303 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-audit\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:59.257971 master-0 kubenswrapper[31411]: I0224 02:20:59.257764 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 02:20:59.270606 master-0 kubenswrapper[31411]: I0224 02:20:59.267858 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-etcd-serving-ca\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:59.280264 master-0 kubenswrapper[31411]: I0224 02:20:59.280210 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 02:20:59.282646 master-0 kubenswrapper[31411]: I0224 02:20:59.281003 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-image-import-ca\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:20:59.303419 master-0 kubenswrapper[31411]: I0224 02:20:59.303096 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 02:20:59.306753 master-0 kubenswrapper[31411]: I0224 02:20:59.305934 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25190a18-bdac-479b-b526-840d28636be3-trusted-ca-bundle\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" 
Feb 24 02:20:59.315533 master-0 kubenswrapper[31411]: I0224 02:20:59.315480 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 02:20:59.336772 master-0 kubenswrapper[31411]: I0224 02:20:59.336708 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 02:20:59.354503 master-0 kubenswrapper[31411]: I0224 02:20:59.354403 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 02:20:59.362218 master-0 kubenswrapper[31411]: I0224 02:20:59.362174 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-metrics-tls\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m" Feb 24 02:20:59.376061 master-0 kubenswrapper[31411]: I0224 02:20:59.375060 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 02:20:59.382112 master-0 kubenswrapper[31411]: I0224 02:20:59.382060 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-config-volume\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m" Feb 24 02:20:59.396301 master-0 kubenswrapper[31411]: I0224 02:20:59.396250 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 02:20:59.417551 master-0 kubenswrapper[31411]: I0224 02:20:59.415783 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 02:20:59.437139 master-0 kubenswrapper[31411]: I0224 02:20:59.437085 31411 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 02:20:59.455478 master-0 kubenswrapper[31411]: I0224 02:20:59.455400 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 24 02:20:59.457895 master-0 kubenswrapper[31411]: I0224 02:20:59.457851 31411 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 02:20:59.462123 master-0 kubenswrapper[31411]: I0224 02:20:59.462050 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 02:20:59.462123 master-0 kubenswrapper[31411]: I0224 02:20:59.462117 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 02:20:59.462328 master-0 kubenswrapper[31411]: I0224 02:20:59.462132 31411 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 02:20:59.463676 master-0 kubenswrapper[31411]: I0224 02:20:59.462700 31411 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 02:20:59.470211 master-0 kubenswrapper[31411]: I0224 02:20:59.470141 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:20:59.471392 master-0 kubenswrapper[31411]: I0224 02:20:59.471345 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-retry-1-master-0" Feb 24 02:20:59.474710 master-0 kubenswrapper[31411]: I0224 02:20:59.474667 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 02:20:59.486156 master-0 kubenswrapper[31411]: I0224 02:20:59.486085 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-etcd-client\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:59.496495 master-0 kubenswrapper[31411]: I0224 02:20:59.496433 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 02:20:59.498161 master-0 kubenswrapper[31411]: I0224 02:20:59.498105 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-serving-cert\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:59.501184 master-0 kubenswrapper[31411]: I0224 02:20:59.501144 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:20:59.521214 master-0 kubenswrapper[31411]: I0224 02:20:59.521171 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 02:20:59.525798 master-0 kubenswrapper[31411]: I0224 02:20:59.525744 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b176946a-c056-441c-9145-b88ca4d75758-encryption-config\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:59.534834 master-0 kubenswrapper[31411]: I0224 02:20:59.534627 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 02:20:59.540375 master-0 kubenswrapper[31411]: I0224 02:20:59.539903 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-audit-policies\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:59.557598 master-0 kubenswrapper[31411]: I0224 02:20:59.556562 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 02:20:59.567623 master-0 kubenswrapper[31411]: I0224 02:20:59.566745 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-etcd-serving-ca\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:59.576592 master-0 kubenswrapper[31411]: I0224 02:20:59.575561 31411 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 02:20:59.578180 master-0 kubenswrapper[31411]: I0224 02:20:59.578140 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b176946a-c056-441c-9145-b88ca4d75758-trusted-ca-bundle\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 02:20:59.595420 master-0 kubenswrapper[31411]: I0224 02:20:59.595369 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 02:20:59.615905 master-0 kubenswrapper[31411]: I0224 02:20:59.615774 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 02:20:59.620824 master-0 kubenswrapper[31411]: I0224 02:20:59.620759 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-default-certificate\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:20:59.623882 master-0 kubenswrapper[31411]: I0224 02:20:59.623825 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") pod \"5508683b-09ae-47a1-89fd-b0891a881e09\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " Feb 24 02:20:59.623993 master-0 kubenswrapper[31411]: I0224 02:20:59.623943 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5508683b-09ae-47a1-89fd-b0891a881e09" (UID: 
"5508683b-09ae-47a1-89fd-b0891a881e09"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:59.624095 master-0 kubenswrapper[31411]: I0224 02:20:59.624067 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") pod \"5508683b-09ae-47a1-89fd-b0891a881e09\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " Feb 24 02:20:59.624233 master-0 kubenswrapper[31411]: I0224 02:20:59.624190 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock" (OuterVolumeSpecName: "var-lock") pod "5508683b-09ae-47a1-89fd-b0891a881e09" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:20:59.625849 master-0 kubenswrapper[31411]: I0224 02:20:59.625812 31411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:59.625849 master-0 kubenswrapper[31411]: I0224 02:20:59.625841 31411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5508683b-09ae-47a1-89fd-b0891a881e09-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:20:59.634678 master-0 kubenswrapper[31411]: I0224 02:20:59.634645 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 02:20:59.641684 master-0 kubenswrapper[31411]: I0224 02:20:59.641641 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-stats-auth\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: 
\"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:20:59.656470 master-0 kubenswrapper[31411]: I0224 02:20:59.655697 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 02:20:59.682633 master-0 kubenswrapper[31411]: I0224 02:20:59.679879 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 02:20:59.682633 master-0 kubenswrapper[31411]: I0224 02:20:59.680427 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-metrics-certs\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:20:59.691597 master-0 kubenswrapper[31411]: I0224 02:20:59.687908 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-service-ca-bundle\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:20:59.695773 master-0 kubenswrapper[31411]: I0224 02:20:59.695680 31411 request.go:700] Waited for 1.004158031s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 24 02:20:59.698994 master-0 kubenswrapper[31411]: I0224 02:20:59.698597 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 02:20:59.721543 master-0 kubenswrapper[31411]: I0224 02:20:59.721476 31411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 24 02:20:59.741619 master-0 kubenswrapper[31411]: I0224 02:20:59.740126 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 24 02:20:59.742312 master-0 kubenswrapper[31411]: I0224 02:20:59.742215 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:20:59.760608 master-0 kubenswrapper[31411]: I0224 02:20:59.759739 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 24 02:20:59.787632 master-0 kubenswrapper[31411]: I0224 02:20:59.786961 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 24 02:20:59.787632 master-0 kubenswrapper[31411]: I0224 02:20:59.787237 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4f5b3b93-a59d-495c-a311-8913fa6000fc-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:59.798474 master-0 kubenswrapper[31411]: I0224 02:20:59.794324 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 24 02:20:59.827839 master-0 kubenswrapper[31411]: I0224 02:20:59.824497 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 02:20:59.844703 master-0 kubenswrapper[31411]: E0224 
02:20:59.844646 31411 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.844925 master-0 kubenswrapper[31411]: E0224 02:20:59.844763 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-apiservice-cert podName:9cad383a-cb69-41a8-aec8-23ee1c930430 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.344737684 +0000 UTC m=+3.561935530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-apiservice-cert") pod "packageserver-597975fc65-xcl6c" (UID: "9cad383a-cb69-41a8-aec8-23ee1c930430") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.844925 master-0 kubenswrapper[31411]: I0224 02:20:59.844865 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845042 31411 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845077 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config podName:e8d6a6c0-b944-4206-9178-9a9930b303b9 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.345069973 +0000 UTC m=+3.562267819 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config") pod "controller-manager-56b6d9c5b7-lxwt6" (UID: "e8d6a6c0-b944-4206-9178-9a9930b303b9") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845099 31411 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845173 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24765ff1-5e7d-4100-ad81-8f73555fc0a2-metrics-client-ca podName:24765ff1-5e7d-4100-ad81-8f73555fc0a2 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.345148876 +0000 UTC m=+3.562346722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/24765ff1-5e7d-4100-ad81-8f73555fc0a2-metrics-client-ca") pod "node-exporter-2qn8m" (UID: "24765ff1-5e7d-4100-ad81-8f73555fc0a2") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845268 31411 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845406 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df42c69b-1a0e-41f5-9006-17540369b9ad-proxy-tls podName:df42c69b-1a0e-41f5-9006-17540369b9ad nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.345375902 +0000 UTC m=+3.562573758 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/df42c69b-1a0e-41f5-9006-17540369b9ad-proxy-tls") pod "machine-config-daemon-hfpql" (UID: "df42c69b-1a0e-41f5-9006-17540369b9ad") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845635 31411 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845763 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/608a8a56-daee-4fa1-8300-42155217c68b-metrics-client-ca podName:608a8a56-daee-4fa1-8300-42155217c68b nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.345728142 +0000 UTC m=+3.562925998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/608a8a56-daee-4fa1-8300-42155217c68b-metrics-client-ca") pod "openshift-state-metrics-6dbff8cb4c-swtr6" (UID: "608a8a56-daee-4fa1-8300-42155217c68b") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845770 31411 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847219 master-0 kubenswrapper[31411]: E0224 02:20:59.845824 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle podName:8c396c41-c617-4631-9700-a7052af5a276 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.345813864 +0000 UTC m=+3.563011720 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle") pod "metrics-server-7b9cc5984b-smpdl" (UID: "8c396c41-c617-4631-9700-a7052af5a276") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847672 master-0 kubenswrapper[31411]: E0224 02:20:59.847633 31411 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847712 master-0 kubenswrapper[31411]: E0224 02:20:59.847702 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91168f3d-70eb-4351-bb83-5411a96ad29d-auth-proxy-config podName:91168f3d-70eb-4351-bb83-5411a96ad29d nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.347688327 +0000 UTC m=+3.564886193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/91168f3d-70eb-4351-bb83-5411a96ad29d-auth-proxy-config") pod "cluster-autoscaler-operator-86b8dc6d6-mtrdk" (UID: "91168f3d-70eb-4351-bb83-5411a96ad29d") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847747 master-0 kubenswrapper[31411]: E0224 02:20:59.847646 31411 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.847787 master-0 kubenswrapper[31411]: E0224 02:20:59.847755 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-tls podName:a5305004-5311-4bc4-ad7c-6670f97c89cb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.347747188 +0000 UTC m=+3.564945044 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-tls") pod "kube-state-metrics-59584d565f-f6f26" (UID: "a5305004-5311-4bc4-ad7c-6670f97c89cb") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.847820 master-0 kubenswrapper[31411]: E0224 02:20:59.847803 31411 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847851 master-0 kubenswrapper[31411]: E0224 02:20:59.847832 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74a7801b-b7a4-4292-91b3-6285c239aeb7-cco-trusted-ca podName:74a7801b-b7a4-4292-91b3-6285c239aeb7 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.347824201 +0000 UTC m=+3.565022057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/74a7801b-b7a4-4292-91b3-6285c239aeb7-cco-trusted-ca") pod "cloud-credential-operator-6968c58f46-fcr59" (UID: "74a7801b-b7a4-4292-91b3-6285c239aeb7") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.847884 master-0 kubenswrapper[31411]: E0224 02:20:59.847866 31411 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.847914 master-0 kubenswrapper[31411]: E0224 02:20:59.847881 31411 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.847914 master-0 kubenswrapper[31411]: E0224 02:20:59.847900 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs 
podName:e68b3061-c9d2-469d-babf-7ccac0ad9b14 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.347890993 +0000 UTC m=+3.565088849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs") pod "multus-admission-controller-5f54bf67d4-ctssl" (UID: "e68b3061-c9d2-469d-babf-7ccac0ad9b14") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.847975 master-0 kubenswrapper[31411]: E0224 02:20:59.847927 31411 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-n76llk2nkkst: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.847975 master-0 kubenswrapper[31411]: E0224 02:20:59.847947 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-kube-rbac-proxy-config podName:a5305004-5311-4bc4-ad7c-6670f97c89cb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.347926854 +0000 UTC m=+3.565124710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-59584d565f-f6f26" (UID: "a5305004-5311-4bc4-ad7c-6670f97c89cb") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.848031 master-0 kubenswrapper[31411]: E0224 02:20:59.847976 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle podName:8c396c41-c617-4631-9700-a7052af5a276 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.347964365 +0000 UTC m=+3.565162231 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle") pod "metrics-server-7b9cc5984b-smpdl" (UID: "8c396c41-c617-4631-9700-a7052af5a276") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.848692 master-0 kubenswrapper[31411]: E0224 02:20:59.848666 31411 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.848742 master-0 kubenswrapper[31411]: E0224 02:20:59.848728 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-certs podName:9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.348716626 +0000 UTC m=+3.565914482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-certs") pod "machine-config-server-drf28" (UID: "9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852554 master-0 kubenswrapper[31411]: E0224 02:20:59.852505 31411 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.852641 master-0 kubenswrapper[31411]: E0224 02:20:59.852601 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config podName:55a2662a-d672-4a46-9b81-bfcaf334eedb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.352588964 +0000 UTC m=+3.569786820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config") pod "route-controller-manager-676fddcd58-49xzd" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.852641 master-0 kubenswrapper[31411]: E0224 02:20:59.852627 31411 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852715 master-0 kubenswrapper[31411]: E0224 02:20:59.852660 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls podName:8e70a9f5-1154-40e9-a487-21e36e7f420a nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.352651326 +0000 UTC m=+3.569849182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" (UID: "8e70a9f5-1154-40e9-a487-21e36e7f420a") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852715 master-0 kubenswrapper[31411]: E0224 02:20:59.852661 31411 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852715 master-0 kubenswrapper[31411]: E0224 02:20:59.852693 31411 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.852715 master-0 kubenswrapper[31411]: E0224 02:20:59.852699 31411 secret.go:189] Couldn't get secret 
openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852846 master-0 kubenswrapper[31411]: E0224 02:20:59.852724 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-webhook-cert podName:9cad383a-cb69-41a8-aec8-23ee1c930430 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.352709488 +0000 UTC m=+3.569907344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-webhook-cert") pod "packageserver-597975fc65-xcl6c" (UID: "9cad383a-cb69-41a8-aec8-23ee1c930430") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852846 master-0 kubenswrapper[31411]: E0224 02:20:59.852734 31411 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852846 master-0 kubenswrapper[31411]: E0224 02:20:59.852755 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-machine-api-operator-tls podName:0ce6dd93-084c-4e15-8b7c-e0829a6df14e nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.352744169 +0000 UTC m=+3.569942015 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-dsjgm" (UID: "0ce6dd93-084c-4e15-8b7c-e0829a6df14e") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852846 master-0 kubenswrapper[31411]: E0224 02:20:59.852760 31411 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852846 master-0 kubenswrapper[31411]: E0224 02:20:59.852773 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-tls podName:df2b8111-41c6-4333-b473-4c08fb836f70 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.352765309 +0000 UTC m=+3.569963165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-66lml" (UID: "df2b8111-41c6-4333-b473-4c08fb836f70") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.852846 master-0 kubenswrapper[31411]: E0224 02:20:59.852790 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles podName:e8d6a6c0-b944-4206-9178-9a9930b303b9 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.35278304 +0000 UTC m=+3.569980896 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles") pod "controller-manager-56b6d9c5b7-lxwt6" (UID: "e8d6a6c0-b944-4206-9178-9a9930b303b9") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.852846 master-0 kubenswrapper[31411]: E0224 02:20:59.852807 31411 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.853037 master-0 kubenswrapper[31411]: E0224 02:20:59.852802 31411 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853037 master-0 kubenswrapper[31411]: E0224 02:20:59.852810 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22a83952-32ec-48f7-85cd-209b62362ae2-tls-certificates podName:22a83952-32ec-48f7-85cd-209b62362ae2 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.35280076 +0000 UTC m=+3.569998616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/22a83952-32ec-48f7-85cd-209b62362ae2-tls-certificates") pod "prometheus-operator-admission-webhook-75d56db95f-9gkp2" (UID: "22a83952-32ec-48f7-85cd-209b62362ae2") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.853098 master-0 kubenswrapper[31411]: E0224 02:20:59.853062 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0127e0d5-9961-4ff6-851d-884e71e1dcf2-samples-operator-tls podName:0127e0d5-9961-4ff6-851d-884e71e1dcf2 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353021786 +0000 UTC m=+3.570219632 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0127e0d5-9961-4ff6-851d-884e71e1dcf2-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-bkc9s" (UID: "0127e0d5-9961-4ff6-851d-884e71e1dcf2") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.853098 master-0 kubenswrapper[31411]: E0224 02:20:59.852820 31411 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.853098 master-0 kubenswrapper[31411]: E0224 02:20:59.853086 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca podName:55a2662a-d672-4a46-9b81-bfcaf334eedb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353078508 +0000 UTC m=+3.570276354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca") pod "route-controller-manager-676fddcd58-49xzd" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853188 master-0 kubenswrapper[31411]: E0224 02:20:59.853147 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91168f3d-70eb-4351-bb83-5411a96ad29d-cert podName:91168f3d-70eb-4351-bb83-5411a96ad29d nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353114179 +0000 UTC m=+3.570312025 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/91168f3d-70eb-4351-bb83-5411a96ad29d-cert") pod "cluster-autoscaler-operator-86b8dc6d6-mtrdk" (UID: "91168f3d-70eb-4351-bb83-5411a96ad29d") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.853188 master-0 kubenswrapper[31411]: E0224 02:20:59.852819 31411 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853188 master-0 kubenswrapper[31411]: E0224 02:20:59.853188 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/732a3831-20e0-47dc-a29a-8bb4659541b7-service-ca podName:732a3831-20e0-47dc-a29a-8bb4659541b7 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353181041 +0000 UTC m=+3.570378887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/732a3831-20e0-47dc-a29a-8bb4659541b7-service-ca") pod "cluster-version-operator-57476485-9cjj5" (UID: "732a3831-20e0-47dc-a29a-8bb4659541b7") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853283 master-0 kubenswrapper[31411]: E0224 02:20:59.852846 31411 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.853283 master-0 kubenswrapper[31411]: I0224 02:20:59.853210 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:20:59.853283 master-0 kubenswrapper[31411]: E0224 02:20:59.853225 31411 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-proxy-tls podName:e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353219322 +0000 UTC m=+3.570417168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-proxy-tls") pod "machine-config-controller-54cb48566c-xzpl4" (UID: "e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.853283 master-0 kubenswrapper[31411]: E0224 02:20:59.852851 31411 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853283 master-0 kubenswrapper[31411]: E0224 02:20:59.853245 31411 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853283 master-0 kubenswrapper[31411]: E0224 02:20:59.852851 31411 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853283 master-0 kubenswrapper[31411]: E0224 02:20:59.853263 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df2b8111-41c6-4333-b473-4c08fb836f70-metrics-client-ca podName:df2b8111-41c6-4333-b473-4c08fb836f70 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353256663 +0000 UTC m=+3.570454509 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/df2b8111-41c6-4333-b473-4c08fb836f70-metrics-client-ca") pod "prometheus-operator-754bc4d665-66lml" (UID: "df2b8111-41c6-4333-b473-4c08fb836f70") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853476 master-0 kubenswrapper[31411]: E0224 02:20:59.853303 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-images podName:8e70a9f5-1154-40e9-a487-21e36e7f420a nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353287614 +0000 UTC m=+3.570485460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-images") pod "cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" (UID: "8e70a9f5-1154-40e9-a487-21e36e7f420a") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853476 master-0 kubenswrapper[31411]: E0224 02:20:59.853319 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-images podName:0ce6dd93-084c-4e15-8b7c-e0829a6df14e nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353312734 +0000 UTC m=+3.570510580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-images") pod "machine-api-operator-5c7cf458b4-dsjgm" (UID: "0ce6dd93-084c-4e15-8b7c-e0829a6df14e") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853476 master-0 kubenswrapper[31411]: E0224 02:20:59.852888 31411 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853476 master-0 kubenswrapper[31411]: E0224 02:20:59.853350 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-auth-proxy-config podName:8e70a9f5-1154-40e9-a487-21e36e7f420a nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353344715 +0000 UTC m=+3.570542561 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" (UID: "8e70a9f5-1154-40e9-a487-21e36e7f420a") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.853476 master-0 kubenswrapper[31411]: E0224 02:20:59.852890 31411 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.853476 master-0 kubenswrapper[31411]: E0224 02:20:59.853386 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-machine-approver-tls podName:8ebd1a97-ff7b-4a10-a1b5-956e427478a8 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.353379336 +0000 UTC m=+3.570577182 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-sjqsx" (UID: "8ebd1a97-ff7b-4a10-a1b5-956e427478a8") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.856186 master-0 kubenswrapper[31411]: E0224 02:20:59.853672 31411 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.856186 master-0 kubenswrapper[31411]: E0224 02:20:59.854369 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls podName:24765ff1-5e7d-4100-ad81-8f73555fc0a2 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.354346823 +0000 UTC m=+3.571544679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls") pod "node-exporter-2qn8m" (UID: "24765ff1-5e7d-4100-ad81-8f73555fc0a2") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857738 31411 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857792 31411 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857812 31411 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 
kubenswrapper[31411]: E0224 02:20:59.857845 31411 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857870 31411 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857828 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-auth-proxy-config podName:8ebd1a97-ff7b-4a10-a1b5-956e427478a8 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.357810991 +0000 UTC m=+3.575008837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-auth-proxy-config") pod "machine-approver-7dd9c7d7b9-sjqsx" (UID: "8ebd1a97-ff7b-4a10-a1b5-956e427478a8") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857899 31411 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857933 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-config podName:8ebd1a97-ff7b-4a10-a1b5-956e427478a8 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.357909593 +0000 UTC m=+3.575107459 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-config") pod "machine-approver-7dd9c7d7b9-sjqsx" (UID: "8ebd1a97-ff7b-4a10-a1b5-956e427478a8") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857951 31411 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857951 31411 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857957 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-metrics-client-ca podName:a5305004-5311-4bc4-ad7c-6670f97c89cb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.357946964 +0000 UTC m=+3.575144820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-metrics-client-ca") pod "kube-state-metrics-59584d565f-f6f26" (UID: "a5305004-5311-4bc4-ad7c-6670f97c89cb") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857994 31411 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.858025 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-service-ca-bundle podName:8e0c87ae-6387-4c00-b03d-582566907fb6 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358005176 +0000 UTC m=+3.575203022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-service-ca-bundle") pod "insights-operator-59b498fcfb-dbkwd" (UID: "8e0c87ae-6387-4c00-b03d-582566907fb6") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.857934 31411 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.858064 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert podName:e8d6a6c0-b944-4206-9178-9a9930b303b9 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358037127 +0000 UTC m=+3.575234993 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert") pod "controller-manager-56b6d9c5b7-lxwt6" (UID: "e8d6a6c0-b944-4206-9178-9a9930b303b9") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: I0224 02:20:59.858063 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 02:20:59.858076 master-0 kubenswrapper[31411]: E0224 02:20:59.858089 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca podName:e8d6a6c0-b944-4206-9178-9a9930b303b9 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358074958 +0000 UTC m=+3.575272804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca") pod "controller-manager-56b6d9c5b7-lxwt6" (UID: "e8d6a6c0-b944-4206-9178-9a9930b303b9") : failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858109 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-node-bootstrap-token podName:9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358097789 +0000 UTC m=+3.575295635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-node-bootstrap-token") pod "machine-config-server-drf28" (UID: "9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858118 31411 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858139 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-kube-rbac-proxy-config podName:df2b8111-41c6-4333-b473-4c08fb836f70 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358126859 +0000 UTC m=+3.575324705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-754bc4d665-66lml" (UID: "df2b8111-41c6-4333-b473-4c08fb836f70") : failed to sync secret cache: timed out waiting for the condition Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858170 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-kube-rbac-proxy-config podName:24765ff1-5e7d-4100-ad81-8f73555fc0a2 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.35815749 +0000 UTC m=+3.575355336 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-kube-rbac-proxy-config") pod "node-exporter-2qn8m" (UID: "24765ff1-5e7d-4100-ad81-8f73555fc0a2") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858199 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/732a3831-20e0-47dc-a29a-8bb4659541b7-serving-cert podName:732a3831-20e0-47dc-a29a-8bb4659541b7 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358177511 +0000 UTC m=+3.575375357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/732a3831-20e0-47dc-a29a-8bb4659541b7-serving-cert") pod "cluster-version-operator-57476485-9cjj5" (UID: "732a3831-20e0-47dc-a29a-8bb4659541b7") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858259 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-config podName:0ce6dd93-084c-4e15-8b7c-e0829a6df14e nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358212282 +0000 UTC m=+3.575410138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-config") pod "machine-api-operator-5c7cf458b4-dsjgm" (UID: "0ce6dd93-084c-4e15-8b7c-e0829a6df14e") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.853688 31411 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858298 31411 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858360 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4cea44a-1c6e-465f-97df-2c951056cb85-control-plane-machine-set-operator-tls podName:a4cea44a-1c6e-465f-97df-2c951056cb85 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358347966 +0000 UTC m=+3.575545812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/a4cea44a-1c6e-465f-97df-2c951056cb85-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-686847ff5f-ckntz" (UID: "a4cea44a-1c6e-465f-97df-2c951056cb85") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858383 31411 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858451 31411 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.858461 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-trusted-ca-bundle podName:8e0c87ae-6387-4c00-b03d-582566907fb6 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358438568 +0000 UTC m=+3.575636424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-trusted-ca-bundle") pod "insights-operator-59b498fcfb-dbkwd" (UID: "8e0c87ae-6387-4c00-b03d-582566907fb6") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:20:59.858630 master-0 kubenswrapper[31411]: E0224 02:20:59.853721 31411 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:20:59.859009 master-0 kubenswrapper[31411]: E0224 02:20:59.858673 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert podName:55a2662a-d672-4a46-9b81-bfcaf334eedb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358633824 +0000 UTC m=+3.575831670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert") pod "route-controller-manager-676fddcd58-49xzd" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.859009 master-0 kubenswrapper[31411]: E0224 02:20:59.858715 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-custom-resource-state-configmap podName:a5305004-5311-4bc4-ad7c-6670f97c89cb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.358693605 +0000 UTC m=+3.575891451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-59584d565f-f6f26" (UID: "a5305004-5311-4bc4-ad7c-6670f97c89cb") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:20:59.859799 master-0 kubenswrapper[31411]: E0224 02:20:59.859744 31411 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.860168 master-0 kubenswrapper[31411]: E0224 02:20:59.859904 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert podName:3e36c9eb-0368-46dc-af84-9c602a15555d nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.359863368 +0000 UTC m=+3.577061214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert") pod "ingress-canary-jjpsc" (UID: "3e36c9eb-0368-46dc-af84-9c602a15555d") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.860621 master-0 kubenswrapper[31411]: E0224 02:20:59.860361 31411 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.860621 master-0 kubenswrapper[31411]: E0224 02:20:59.860414 31411 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:20:59.860621 master-0 kubenswrapper[31411]: E0224 02:20:59.860420 31411 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.860621 master-0 kubenswrapper[31411]: E0224 02:20:59.860464 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs podName:8c396c41-c617-4631-9700-a7052af5a276 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.360431684 +0000 UTC m=+3.577629530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs") pod "metrics-server-7b9cc5984b-smpdl" (UID: "8c396c41-c617-4631-9700-a7052af5a276") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.860621 master-0 kubenswrapper[31411]: E0224 02:20:59.860494 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles podName:8c396c41-c617-4631-9700-a7052af5a276 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.360483575 +0000 UTC m=+3.577681421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles") pod "metrics-server-7b9cc5984b-smpdl" (UID: "8c396c41-c617-4631-9700-a7052af5a276") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:20:59.860621 master-0 kubenswrapper[31411]: E0224 02:20:59.860530 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls podName:8c396c41-c617-4631-9700-a7052af5a276 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.360506166 +0000 UTC m=+3.577704012 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls") pod "metrics-server-7b9cc5984b-smpdl" (UID: "8c396c41-c617-4631-9700-a7052af5a276") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.860621 master-0 kubenswrapper[31411]: E0224 02:20:59.860507 31411 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.860621 master-0 kubenswrapper[31411]: E0224 02:20:59.860592 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-tls podName:608a8a56-daee-4fa1-8300-42155217c68b nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.360583498 +0000 UTC m=+3.577781344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-tls") pod "openshift-state-metrics-6dbff8cb4c-swtr6" (UID: "608a8a56-daee-4fa1-8300-42155217c68b") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.860621 master-0 kubenswrapper[31411]: E0224 02:20:59.860593 31411 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.869411 master-0 kubenswrapper[31411]: E0224 02:20:59.860633 31411 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.869411 master-0 kubenswrapper[31411]: E0224 02:20:59.860649 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e0c87ae-6387-4c00-b03d-582566907fb6-serving-cert podName:8e0c87ae-6387-4c00-b03d-582566907fb6 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.36063312 +0000 UTC m=+3.577830966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e0c87ae-6387-4c00-b03d-582566907fb6-serving-cert") pod "insights-operator-59b498fcfb-dbkwd" (UID: "8e0c87ae-6387-4c00-b03d-582566907fb6") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.869411 master-0 kubenswrapper[31411]: E0224 02:20:59.860683 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-kube-rbac-proxy-config podName:608a8a56-daee-4fa1-8300-42155217c68b nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.360673741 +0000 UTC m=+3.577871587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-6dbff8cb4c-swtr6" (UID: "608a8a56-daee-4fa1-8300-42155217c68b") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.869411 master-0 kubenswrapper[31411]: E0224 02:20:59.866994 31411 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.869411 master-0 kubenswrapper[31411]: E0224 02:20:59.867111 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/011c6603-d533-4449-b409-f6f698a3bd50-cluster-storage-operator-serving-cert podName:011c6603-d533-4449-b409-f6f698a3bd50 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.36707602 +0000 UTC m=+3.584273876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/011c6603-d533-4449-b409-f6f698a3bd50-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f94476f49-c5wlk" (UID: "011c6603-d533-4449-b409-f6f698a3bd50") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.879523 master-0 kubenswrapper[31411]: E0224 02:20:59.879484 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert podName:74a7801b-b7a4-4292-91b3-6285c239aeb7 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:00.379455617 +0000 UTC m=+3.596653473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-fcr59" (UID: "74a7801b-b7a4-4292-91b3-6285c239aeb7") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:20:59.879649 master-0 kubenswrapper[31411]: I0224 02:20:59.879595 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 24 02:20:59.895531 master-0 kubenswrapper[31411]: I0224 02:20:59.895492 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 24 02:20:59.919040 master-0 kubenswrapper[31411]: I0224 02:20:59.917875 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-thdws"
Feb 24 02:20:59.935599 master-0 kubenswrapper[31411]: I0224 02:20:59.934731 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 24 02:20:59.958608 master-0 kubenswrapper[31411]: I0224 02:20:59.955921 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 24 02:20:59.979365 master-0 kubenswrapper[31411]: I0224 02:20:59.979296 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7phpl"
Feb 24 02:20:59.994565 master-0 kubenswrapper[31411]: I0224 02:20:59.994507 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-pfvnv"
Feb 24 02:21:00.022843 master-0 kubenswrapper[31411]: I0224 02:21:00.021468 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 24 02:21:00.035344 master-0 kubenswrapper[31411]: I0224 02:21:00.035287 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 24 02:21:00.056714 master-0 kubenswrapper[31411]: I0224 02:21:00.054858 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 24 02:21:00.075185 master-0 kubenswrapper[31411]: I0224 02:21:00.075069 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 24 02:21:00.096262 master-0 kubenswrapper[31411]: I0224 02:21:00.096213 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lbd2d"
Feb 24 02:21:00.118607 master-0 kubenswrapper[31411]: I0224 02:21:00.118530 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 02:21:00.135964 master-0 kubenswrapper[31411]: I0224 02:21:00.135820 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-k5dgr"
Feb 24 02:21:00.161082 master-0 kubenswrapper[31411]: I0224 02:21:00.161032 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 24 02:21:00.178875 master-0 kubenswrapper[31411]: I0224 02:21:00.178840 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 24 02:21:00.199524 master-0 kubenswrapper[31411]: I0224 02:21:00.198727 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-ndtpv"
Feb 24 02:21:00.213985 master-0 kubenswrapper[31411]: I0224 02:21:00.213935 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 24 02:21:00.234721 master-0 kubenswrapper[31411]: I0224 02:21:00.234668 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 24 02:21:00.254783 master-0 kubenswrapper[31411]: I0224 02:21:00.254742 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 24 02:21:00.275466 master-0 kubenswrapper[31411]: I0224 02:21:00.275409 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 24 02:21:00.294192 master-0 kubenswrapper[31411]: I0224 02:21:00.294141 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 24 02:21:00.314471 master-0 kubenswrapper[31411]: I0224 02:21:00.314441 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 24 02:21:00.334639 master-0 kubenswrapper[31411]: I0224 02:21:00.334564 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rrfph"
Feb 24 02:21:00.353591 master-0 kubenswrapper[31411]: I0224 02:21:00.353517 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/24765ff1-5e7d-4100-ad81-8f73555fc0a2-metrics-client-ca\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m"
Feb 24 02:21:00.353652 master-0 kubenswrapper[31411]: I0224 02:21:00.353620 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:21:00.353722 master-0 kubenswrapper[31411]: I0224 02:21:00.353672 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df42c69b-1a0e-41f5-9006-17540369b9ad-proxy-tls\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql"
Feb 24 02:21:00.353762 master-0 kubenswrapper[31411]: I0224 02:21:00.353728 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-apiservice-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:21:00.353866 master-0 kubenswrapper[31411]: I0224 02:21:00.353829 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/608a8a56-daee-4fa1-8300-42155217c68b-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6"
Feb 24 02:21:00.353956 master-0 kubenswrapper[31411]: I0224 02:21:00.353920 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl"
Feb 24 02:21:00.354000 master-0 kubenswrapper[31411]: I0224 02:21:00.353973 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26"
Feb 24 02:21:00.354058 master-0 kubenswrapper[31411]: I0224 02:21:00.354032 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91168f3d-70eb-4351-bb83-5411a96ad29d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:21:00.354105 master-0 kubenswrapper[31411]: I0224 02:21:00.354081 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74a7801b-b7a4-4292-91b3-6285c239aeb7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"
Feb 24 02:21:00.354337 master-0 kubenswrapper[31411]: I0224 02:21:00.354241 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-apiservice-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:21:00.354502 master-0 kubenswrapper[31411]: I0224 02:21:00.354478 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 24 02:21:00.354556 master-0 kubenswrapper[31411]: I0224 02:21:00.354471 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26"
Feb 24 02:21:00.354624 master-0 kubenswrapper[31411]: I0224 02:21:00.354600 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl"
Feb 24 02:21:00.354666 master-0 kubenswrapper[31411]: I0224 02:21:00.354640 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl"
Feb 24 02:21:00.354666 master-0 kubenswrapper[31411]: I0224 02:21:00.354635 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91168f3d-70eb-4351-bb83-5411a96ad29d-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:21:00.354724 master-0 kubenswrapper[31411]: I0224 02:21:00.354667 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-certs\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28"
Feb 24 02:21:00.354800 master-0 kubenswrapper[31411]: I0224 02:21:00.354760 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74a7801b-b7a4-4292-91b3-6285c239aeb7-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"
Feb 24 02:21:00.355284 master-0 kubenswrapper[31411]: I0224 02:21:00.354837 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/732a3831-20e0-47dc-a29a-8bb4659541b7-service-ca\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:21:00.355284 master-0 kubenswrapper[31411]: I0224 02:21:00.354876 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:21:00.355284 master-0 kubenswrapper[31411]: I0224 02:21:00.355063 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-proxy-tls\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4"
Feb 24 02:21:00.355284 master-0 kubenswrapper[31411]: I0224 02:21:00.355125 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:21:00.355284 master-0 kubenswrapper[31411]: I0224 02:21:00.355188 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91168f3d-70eb-4351-bb83-5411a96ad29d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:21:00.355284 master-0 kubenswrapper[31411]: I0224 02:21:00.355211 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/732a3831-20e0-47dc-a29a-8bb4659541b7-service-ca\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:21:00.355284 master-0 kubenswrapper[31411]: I0224 02:21:00.355226 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:21:00.355494 master-0 kubenswrapper[31411]: I0224 02:21:00.355289 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:21:00.355494 master-0 kubenswrapper[31411]: I0224 02:21:00.355348 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:00.355494 master-0 kubenswrapper[31411]: I0224 02:21:00.355389 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:21:00.355494 master-0 kubenswrapper[31411]: I0224 02:21:00.355443 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm"
Feb 24 02:21:00.355625 master-0 kubenswrapper[31411]: I0224 02:21:00.355505 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91168f3d-70eb-4351-bb83-5411a96ad29d-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:21:00.355625 master-0 kubenswrapper[31411]: I0224 02:21:00.355530 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/df2b8111-41c6-4333-b473-4c08fb836f70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:21:00.355697 master-0 kubenswrapper[31411]: I0224 02:21:00.355635 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/22a83952-32ec-48f7-85cd-209b62362ae2-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-9gkp2\" (UID: \"22a83952-32ec-48f7-85cd-209b62362ae2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"
Feb 24 02:21:00.356048 master-0 kubenswrapper[31411]: I0224 02:21:00.355728 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0127e0d5-9961-4ff6-851d-884e71e1dcf2-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:21:00.356048 master-0 kubenswrapper[31411]: I0224 02:21:00.355776 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:00.356048 master-0 kubenswrapper[31411]: I0224 02:21:00.355813 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-webhook-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:21:00.356048 master-0 kubenswrapper[31411]: I0224 02:21:00.355877 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/22a83952-32ec-48f7-85cd-209b62362ae2-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-9gkp2\" (UID: \"22a83952-32ec-48f7-85cd-209b62362ae2\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"
Feb 24 02:21:00.356048 master-0 kubenswrapper[31411]: I0224 02:21:00.355937 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:21:00.356048 master-0 kubenswrapper[31411]: I0224 02:21:00.355993 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0127e0d5-9961-4ff6-851d-884e71e1dcf2-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:21:00.356048 master-0 kubenswrapper[31411]: I0224 02:21:00.356008 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-images\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm"
Feb 24 02:21:00.356303 master-0 kubenswrapper[31411]: I0224 02:21:00.356181 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m"
Feb 24 02:21:00.356303 master-0 kubenswrapper[31411]: I0224 02:21:00.356240 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cad383a-cb69-41a8-aec8-23ee1c930430-webhook-cert\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:21:00.375751 master-0 kubenswrapper[31411]: I0224 02:21:00.375455 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-6fp4p"
Feb 24 02:21:00.404017 master-0 kubenswrapper[31411]: I0224 02:21:00.403900 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 24 02:21:00.415306 master-0 kubenswrapper[31411]: I0224 02:21:00.415273 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 24 02:21:00.424590 master-0 kubenswrapper[31411]: I0224 02:21:00.424526 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df42c69b-1a0e-41f5-9006-17540369b9ad-proxy-tls\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql"
Feb 24 02:21:00.434567 master-0 kubenswrapper[31411]: I0224 02:21:00.434516 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-pbr2s"
Feb 24 02:21:00.457017 master-0 kubenswrapper[31411]: I0224 02:21:00.456805 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 24 02:21:00.458567 master-0 kubenswrapper[31411]: I0224 02:21:00.458521 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:21:00.458567 master-0 kubenswrapper[31411]: I0224 02:21:00.458582 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4cea44a-1c6e-465f-97df-2c951056cb85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:21:00.458697 master-0 kubenswrapper[31411]: I0224 02:21:00.458612 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:21:00.458982 master-0 kubenswrapper[31411]: I0224 02:21:00.458899 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:21:00.458982 master-0 kubenswrapper[31411]: I0224 02:21:00.458909 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4cea44a-1c6e-465f-97df-2c951056cb85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz" Feb 24 02:21:00.459273 master-0 kubenswrapper[31411]: I0224 02:21:00.459101 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-config\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:21:00.459273 master-0 kubenswrapper[31411]: I0224 02:21:00.459209 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:21:00.459273 master-0 kubenswrapper[31411]: I0224 02:21:00.459236 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:21:00.459273 master-0 kubenswrapper[31411]: I0224 02:21:00.459246 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:21:00.459989 master-0 kubenswrapper[31411]: I0224 02:21:00.459622 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/732a3831-20e0-47dc-a29a-8bb4659541b7-serving-cert\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" Feb 24 02:21:00.459989 master-0 kubenswrapper[31411]: I0224 02:21:00.459717 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " 
pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:21:00.459989 master-0 kubenswrapper[31411]: I0224 02:21:00.459772 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-service-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:21:00.459989 master-0 kubenswrapper[31411]: I0224 02:21:00.459951 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/732a3831-20e0-47dc-a29a-8bb4659541b7-serving-cert\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5" Feb 24 02:21:00.460155 master-0 kubenswrapper[31411]: I0224 02:21:00.460033 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml" Feb 24 02:21:00.460155 master-0 kubenswrapper[31411]: I0224 02:21:00.460100 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m" Feb 24 02:21:00.460155 master-0 kubenswrapper[31411]: I0224 02:21:00.460151 31411 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:21:00.460280 master-0 kubenswrapper[31411]: I0224 02:21:00.460164 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0c87ae-6387-4c00-b03d-582566907fb6-service-ca-bundle\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:21:00.460449 master-0 kubenswrapper[31411]: I0224 02:21:00.460414 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:21:00.460521 master-0 kubenswrapper[31411]: I0224 02:21:00.460483 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:21:00.460558 master-0 kubenswrapper[31411]: I0224 02:21:00.460538 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " 
pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:00.460708 master-0 kubenswrapper[31411]: I0224 02:21:00.460624 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:00.460708 master-0 kubenswrapper[31411]: I0224 02:21:00.460692 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:00.460810 master-0 kubenswrapper[31411]: I0224 02:21:00.460755 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:21:00.460810 master-0 kubenswrapper[31411]: I0224 02:21:00.460798 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e0c87ae-6387-4c00-b03d-582566907fb6-serving-cert\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:21:00.461332 master-0 kubenswrapper[31411]: I0224 02:21:00.461035 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/011c6603-d533-4449-b409-f6f698a3bd50-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk" Feb 24 02:21:00.461332 master-0 kubenswrapper[31411]: I0224 02:21:00.461200 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e0c87ae-6387-4c00-b03d-582566907fb6-serving-cert\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd" Feb 24 02:21:00.461332 master-0 kubenswrapper[31411]: I0224 02:21:00.461290 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/011c6603-d533-4449-b409-f6f698a3bd50-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk" Feb 24 02:21:00.462094 master-0 kubenswrapper[31411]: I0224 02:21:00.462055 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:21:00.462159 master-0 kubenswrapper[31411]: I0224 02:21:00.462129 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" 
(UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26" Feb 24 02:21:00.462451 master-0 kubenswrapper[31411]: I0224 02:21:00.462251 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-node-bootstrap-token\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:21:00.462451 master-0 kubenswrapper[31411]: I0224 02:21:00.462414 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/74a7801b-b7a4-4292-91b3-6285c239aeb7-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59" Feb 24 02:21:00.466835 master-0 kubenswrapper[31411]: I0224 02:21:00.466796 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:21:00.477933 master-0 kubenswrapper[31411]: I0224 02:21:00.477884 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 02:21:00.490655 master-0 kubenswrapper[31411]: I0224 02:21:00.490607 31411 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-images\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:21:00.494280 master-0 kubenswrapper[31411]: I0224 02:21:00.494228 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:21:00.495660 master-0 kubenswrapper[31411]: I0224 02:21:00.495605 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-brkg4" Feb 24 02:21:00.515247 master-0 kubenswrapper[31411]: I0224 02:21:00.515162 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 02:21:00.520408 master-0 kubenswrapper[31411]: I0224 02:21:00.520343 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-config\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm" Feb 24 02:21:00.540751 master-0 kubenswrapper[31411]: I0224 02:21:00.534877 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mdmqh" Feb 24 02:21:00.554436 master-0 kubenswrapper[31411]: I0224 02:21:00.554349 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 02:21:00.575054 master-0 kubenswrapper[31411]: I0224 02:21:00.575017 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 24 02:21:00.581878 master-0 kubenswrapper[31411]: I0224 02:21:00.581841 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:00.595489 master-0 kubenswrapper[31411]: I0224 02:21:00.595428 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-xlxp2" Feb 24 02:21:00.618703 master-0 kubenswrapper[31411]: I0224 02:21:00.618666 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-n76llk2nkkst" Feb 24 02:21:00.626098 master-0 kubenswrapper[31411]: I0224 02:21:00.626042 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:00.634700 master-0 kubenswrapper[31411]: I0224 02:21:00.634654 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 24 02:21:00.644775 master-0 kubenswrapper[31411]: I0224 02:21:00.644724 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:00.654602 master-0 kubenswrapper[31411]: I0224 02:21:00.654505 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 24 
02:21:00.661210 master-0 kubenswrapper[31411]: I0224 02:21:00.661153 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:00.677265 master-0 kubenswrapper[31411]: I0224 02:21:00.677226 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 02:21:00.681382 master-0 kubenswrapper[31411]: I0224 02:21:00.681342 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e36c9eb-0368-46dc-af84-9c602a15555d-cert\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc" Feb 24 02:21:00.695420 master-0 kubenswrapper[31411]: I0224 02:21:00.695379 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 24 02:21:00.701547 master-0 kubenswrapper[31411]: I0224 02:21:00.701066 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:00.713257 master-0 kubenswrapper[31411]: I0224 02:21:00.713209 31411 request.go:700] Waited for 2.013001013s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 24 02:21:00.714663 
master-0 kubenswrapper[31411]: I0224 02:21:00.714615 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 02:21:00.733868 master-0 kubenswrapper[31411]: I0224 02:21:00.733840 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 24 02:21:00.735101 master-0 kubenswrapper[31411]: I0224 02:21:00.735052 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-certs\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:21:00.755056 master-0 kubenswrapper[31411]: I0224 02:21:00.754996 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-5kctc" Feb 24 02:21:00.774788 master-0 kubenswrapper[31411]: I0224 02:21:00.774735 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 02:21:00.783998 master-0 kubenswrapper[31411]: I0224 02:21:00.783955 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-node-bootstrap-token\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28" Feb 24 02:21:00.794981 master-0 kubenswrapper[31411]: I0224 02:21:00.794936 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-7xngw" Feb 24 02:21:00.817127 master-0 kubenswrapper[31411]: I0224 02:21:00.817064 31411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 02:21:00.821765 master-0 kubenswrapper[31411]: I0224 02:21:00.821727 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:21:00.835083 master-0 kubenswrapper[31411]: I0224 02:21:00.835033 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 02:21:00.835820 master-0 kubenswrapper[31411]: I0224 02:21:00.835781 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:21:00.854773 master-0 kubenswrapper[31411]: I0224 02:21:00.854712 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 24 02:21:00.861928 master-0 kubenswrapper[31411]: I0224 02:21:00.861896 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:21:00.904153 master-0 kubenswrapper[31411]: I0224 02:21:00.875499 31411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 02:21:00.908866 master-0 kubenswrapper[31411]: I0224 02:21:00.908803 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-shkn8" Feb 24 02:21:00.915382 master-0 kubenswrapper[31411]: I0224 02:21:00.915341 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 02:21:00.920515 master-0 kubenswrapper[31411]: I0224 02:21:00.920339 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-config\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx" Feb 24 02:21:00.934458 master-0 kubenswrapper[31411]: I0224 02:21:00.934417 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 24 02:21:00.942607 master-0 kubenswrapper[31411]: I0224 02:21:00.942180 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/608a8a56-daee-4fa1-8300-42155217c68b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6" Feb 24 02:21:00.954493 master-0 kubenswrapper[31411]: I0224 02:21:00.954459 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 02:21:00.974039 master-0 kubenswrapper[31411]: I0224 02:21:00.973978 31411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 24 02:21:00.976465 master-0 kubenswrapper[31411]: I0224 02:21:00.976435 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:21:00.994226 master-0 kubenswrapper[31411]: I0224 02:21:00.994191 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 24 02:21:00.996487 master-0 kubenswrapper[31411]: I0224 02:21:00.996456 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/df2b8111-41c6-4333-b473-4c08fb836f70-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:21:01.000561 master-0 kubenswrapper[31411]: I0224 02:21:01.000511 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26"
Feb 24 02:21:01.004127 master-0 kubenswrapper[31411]: I0224 02:21:01.004085 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/24765ff1-5e7d-4100-ad81-8f73555fc0a2-metrics-client-ca\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m"
Feb 24 02:21:01.004355 master-0 kubenswrapper[31411]: I0224 02:21:01.004331 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/608a8a56-daee-4fa1-8300-42155217c68b-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6"
Feb 24 02:21:01.013983 master-0 kubenswrapper[31411]: I0224 02:21:01.013934 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-fdtcj"
Feb 24 02:21:01.035208 master-0 kubenswrapper[31411]: I0224 02:21:01.035164 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-ntn8v"
Feb 24 02:21:01.054662 master-0 kubenswrapper[31411]: I0224 02:21:01.054622 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 02:21:01.059434 master-0 kubenswrapper[31411]: I0224 02:21:01.059401 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:21:01.075014 master-0 kubenswrapper[31411]: I0224 02:21:01.074965 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 02:21:01.094385 master-0 kubenswrapper[31411]: I0224 02:21:01.094345 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 02:21:01.114535 master-0 kubenswrapper[31411]: I0224 02:21:01.114475 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 24 02:21:01.120807 master-0 kubenswrapper[31411]: I0224 02:21:01.120766 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/df2b8111-41c6-4333-b473-4c08fb836f70-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:21:01.141730 master-0 kubenswrapper[31411]: I0224 02:21:01.141696 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 02:21:01.146713 master-0 kubenswrapper[31411]: I0224 02:21:01.146681 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:21:01.154479 master-0 kubenswrapper[31411]: I0224 02:21:01.154425 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 02:21:01.164527 master-0 kubenswrapper[31411]: I0224 02:21:01.164455 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:21:01.174313 master-0 kubenswrapper[31411]: I0224 02:21:01.174276 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-rpcz4"
Feb 24 02:21:01.194724 master-0 kubenswrapper[31411]: I0224 02:21:01.194672 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-m4t4r"
Feb 24 02:21:01.214870 master-0 kubenswrapper[31411]: I0224 02:21:01.214821 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 24 02:21:01.225604 master-0 kubenswrapper[31411]: I0224 02:21:01.225516 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26"
Feb 24 02:21:01.233927 master-0 kubenswrapper[31411]: I0224 02:21:01.233880 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 24 02:21:01.236396 master-0 kubenswrapper[31411]: I0224 02:21:01.236351 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-proxy-tls\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4"
Feb 24 02:21:01.255218 master-0 kubenswrapper[31411]: I0224 02:21:01.255161 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 24 02:21:01.265195 master-0 kubenswrapper[31411]: I0224 02:21:01.265133 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26"
Feb 24 02:21:01.275357 master-0 kubenswrapper[31411]: I0224 02:21:01.275319 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 02:21:01.279860 master-0 kubenswrapper[31411]: I0224 02:21:01.279812 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:21:01.294627 master-0 kubenswrapper[31411]: I0224 02:21:01.294562 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 24 02:21:01.295809 master-0 kubenswrapper[31411]: I0224 02:21:01.295771 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:21:01.314450 master-0 kubenswrapper[31411]: I0224 02:21:01.314395 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 24 02:21:01.315706 master-0 kubenswrapper[31411]: I0224 02:21:01.315662 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e70a9f5-1154-40e9-a487-21e36e7f420a-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:21:01.335140 master-0 kubenswrapper[31411]: I0224 02:21:01.335089 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 24 02:21:01.342879 master-0 kubenswrapper[31411]: I0224 02:21:01.342840 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26"
Feb 24 02:21:01.355107 master-0 kubenswrapper[31411]: E0224 02:21:01.355069 31411 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:21:01.355168 master-0 kubenswrapper[31411]: E0224 02:21:01.355154 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs podName:e68b3061-c9d2-469d-babf-7ccac0ad9b14 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:02.355128772 +0000 UTC m=+5.572326638 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs") pod "multus-admission-controller-5f54bf67d4-ctssl" (UID: "e68b3061-c9d2-469d-babf-7ccac0ad9b14") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:21:01.355240 master-0 kubenswrapper[31411]: I0224 02:21:01.355193 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lprdj"
Feb 24 02:21:01.355724 master-0 kubenswrapper[31411]: E0224 02:21:01.355680 31411 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:21:01.355833 master-0 kubenswrapper[31411]: E0224 02:21:01.355798 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca podName:55a2662a-d672-4a46-9b81-bfcaf334eedb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:02.35577019 +0000 UTC m=+5.572968036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca") pod "route-controller-manager-676fddcd58-49xzd" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:21:01.356393 master-0 kubenswrapper[31411]: E0224 02:21:01.356367 31411 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:21:01.356393 master-0 kubenswrapper[31411]: E0224 02:21:01.356384 31411 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:21:01.356484 master-0 kubenswrapper[31411]: E0224 02:21:01.356420 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config podName:55a2662a-d672-4a46-9b81-bfcaf334eedb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:02.356408068 +0000 UTC m=+5.573605914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config") pod "route-controller-manager-676fddcd58-49xzd" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 02:21:01.356484 master-0 kubenswrapper[31411]: E0224 02:21:01.356445 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls podName:8e70a9f5-1154-40e9-a487-21e36e7f420a nodeName:}" failed. No retries permitted until 2026-02-24 02:21:02.356430539 +0000 UTC m=+5.573628395 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-67dd8d7969-8znkt" (UID: "8e70a9f5-1154-40e9-a487-21e36e7f420a") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:21:01.356871 master-0 kubenswrapper[31411]: E0224 02:21:01.356837 31411 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:21:01.356976 master-0 kubenswrapper[31411]: E0224 02:21:01.356942 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls podName:24765ff1-5e7d-4100-ad81-8f73555fc0a2 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:02.356914002 +0000 UTC m=+5.574111858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls") pod "node-exporter-2qn8m" (UID: "24765ff1-5e7d-4100-ad81-8f73555fc0a2") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:21:01.375720 master-0 kubenswrapper[31411]: I0224 02:21:01.375669 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 24 02:21:01.394845 master-0 kubenswrapper[31411]: I0224 02:21:01.394797 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-ddb6g"
Feb 24 02:21:01.414497 master-0 kubenswrapper[31411]: I0224 02:21:01.414461 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 02:21:01.435504 master-0 kubenswrapper[31411]: I0224 02:21:01.435414 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 24 02:21:01.441374 master-0 kubenswrapper[31411]: I0224 02:21:01.441328 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m"
Feb 24 02:21:01.455536 master-0 kubenswrapper[31411]: I0224 02:21:01.455399 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 24 02:21:01.459509 master-0 kubenswrapper[31411]: E0224 02:21:01.459442 31411 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 02:21:01.459700 master-0 kubenswrapper[31411]: E0224 02:21:01.459665 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert podName:55a2662a-d672-4a46-9b81-bfcaf334eedb nodeName:}" failed. No retries permitted until 2026-02-24 02:21:02.45962617 +0000 UTC m=+5.676824206 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert") pod "route-controller-manager-676fddcd58-49xzd" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb") : failed to sync secret cache: timed out waiting for the condition
Feb 24 02:21:01.476045 master-0 kubenswrapper[31411]: I0224 02:21:01.475994 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 02:21:01.495215 master-0 kubenswrapper[31411]: I0224 02:21:01.495184 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-c5cjc"
Feb 24 02:21:01.515137 master-0 kubenswrapper[31411]: I0224 02:21:01.515093 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 24 02:21:01.535368 master-0 kubenswrapper[31411]: I0224 02:21:01.535302 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 02:21:01.555233 master-0 kubenswrapper[31411]: I0224 02:21:01.555191 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ltf57"
Feb 24 02:21:01.575444 master-0 kubenswrapper[31411]: I0224 02:21:01.575418 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 02:21:01.594459 master-0 kubenswrapper[31411]: I0224 02:21:01.594411 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 02:21:01.614663 master-0 kubenswrapper[31411]: I0224 02:21:01.614609 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 02:21:01.624277 master-0 kubenswrapper[31411]: E0224 02:21:01.624161 31411 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 02:21:01.635171 master-0 kubenswrapper[31411]: I0224 02:21:01.635132 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 02:21:01.655109 master-0 kubenswrapper[31411]: I0224 02:21:01.655072 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 24 02:21:01.675355 master-0 kubenswrapper[31411]: I0224 02:21:01.675296 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-5hsr5"
Feb 24 02:21:01.694414 master-0 kubenswrapper[31411]: I0224 02:21:01.694332 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-k86pk"
Feb 24 02:21:01.713349 master-0 kubenswrapper[31411]: I0224 02:21:01.713278 31411 request.go:700] Waited for 3.010455556s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 24 02:21:01.715402 master-0 kubenswrapper[31411]: I0224 02:21:01.715358 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 24 02:21:01.735596 master-0 kubenswrapper[31411]: I0224 02:21:01.735544 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-ncv6j"
Feb 24 02:21:01.798479 master-0 kubenswrapper[31411]: I0224 02:21:01.798424 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85222bf-f51a-4232-8db1-1e6ee593617b-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-2492q\" (UID: \"f85222bf-f51a-4232-8db1-1e6ee593617b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-2492q"
Feb 24 02:21:01.822852 master-0 kubenswrapper[31411]: I0224 02:21:01.822798 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrmsh\" (UniqueName: \"kubernetes.io/projected/57811d07-ae8a-44b7-8efb-dafc5afad31e-kube-api-access-vrmsh\") pod \"multus-additional-cni-plugins-jtdht\" (UID: \"57811d07-ae8a-44b7-8efb-dafc5afad31e\") " pod="openshift-multus/multus-additional-cni-plugins-jtdht"
Feb 24 02:21:01.838422 master-0 kubenswrapper[31411]: I0224 02:21:01.838348 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8rjx\" (UniqueName: \"kubernetes.io/projected/91d16f7b-390a-4d9d-99d6-cc8e210801d1-kube-api-access-b8rjx\") pod \"marketplace-operator-6f5488b997-4qf9p\" (UID: \"91d16f7b-390a-4d9d-99d6-cc8e210801d1\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:21:01.862380 master-0 kubenswrapper[31411]: I0224 02:21:01.862299 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:21:01.872511 master-0 kubenswrapper[31411]: I0224 02:21:01.872444 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgjlz\" (UniqueName: \"kubernetes.io/projected/6320dbb5-b84d-4a57-8c65-fbed8421f84a-kube-api-access-pgjlz\") pod \"package-server-manager-5c75f78c8b-2hllb\" (UID: \"6320dbb5-b84d-4a57-8c65-fbed8421f84a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:21:01.897446 master-0 kubenswrapper[31411]: I0224 02:21:01.897370 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02536a3-7d3e-4e74-9625-aefed518ec35-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-tl97n\" (UID: \"a02536a3-7d3e-4e74-9625-aefed518ec35\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-tl97n"
Feb 24 02:21:01.918229 master-0 kubenswrapper[31411]: I0224 02:21:01.918145 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nwzm\" (UniqueName: \"kubernetes.io/projected/c92835f0-7f32-4584-8304-843d7979392a-kube-api-access-6nwzm\") pod \"openshift-config-operator-6f47d587d6-ccrxg\" (UID: \"c92835f0-7f32-4584-8304-843d7979392a\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:21:01.938981 master-0 kubenswrapper[31411]: I0224 02:21:01.938932 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8jx\" (UniqueName: \"kubernetes.io/projected/fb39fcc8-beb4-410e-b2a4-0b3e150719cc-kube-api-access-rc8jx\") pod \"ovnkube-node-rg9r6\" (UID: \"fb39fcc8-beb4-410e-b2a4-0b3e150719cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:21:01.952122 master-0 kubenswrapper[31411]: I0224 02:21:01.952054 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86pcb\" (UniqueName: \"kubernetes.io/projected/7b098bd4-5751-4b01-8409-0688fd29233e-kube-api-access-86pcb\") pod \"csi-snapshot-controller-operator-6fb4df594f-c95qc\" (UID: \"7b098bd4-5751-4b01-8409-0688fd29233e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-c95qc"
Feb 24 02:21:01.971131 master-0 kubenswrapper[31411]: I0224 02:21:01.971059 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn8hz\" (UniqueName: \"kubernetes.io/projected/e3a675b9-feaa-4456-b7b4-0cd3afc42a42-kube-api-access-nn8hz\") pod \"network-check-target-54b95\" (UID: \"e3a675b9-feaa-4456-b7b4-0cd3afc42a42\") " pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:21:01.998319 master-0 kubenswrapper[31411]: I0224 02:21:01.998251 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2b65\" (UniqueName: \"kubernetes.io/projected/7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c-kube-api-access-n2b65\") pod \"openshift-controller-manager-operator-584cc7bcb5-c7fgn\" (UID: \"7e50df05-0f7f-4c4f-84fa-92dd1f7ee86c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7fgn"
Feb 24 02:21:02.022336 master-0 kubenswrapper[31411]: I0224 02:21:02.022289 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6lsp\" (UniqueName: \"kubernetes.io/projected/db8d6627-394c-4087-bfa4-bf7580f6bb4b-kube-api-access-x6lsp\") pod \"machine-config-operator-7f8c75f984-ffnq7\" (UID: \"db8d6627-394c-4087-bfa4-bf7580f6bb4b\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-ffnq7"
Feb 24 02:21:02.038876 master-0 kubenswrapper[31411]: I0224 02:21:02.038825 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv6t5\" (UniqueName: \"kubernetes.io/projected/d8e20d47-aeb6-41bf-9715-c437beb8e9e4-kube-api-access-qv6t5\") pod \"iptables-alerter-rjbl5\" (UID: \"d8e20d47-aeb6-41bf-9715-c437beb8e9e4\") " pod="openshift-network-operator/iptables-alerter-rjbl5"
Feb 24 02:21:02.074319 master-0 kubenswrapper[31411]: I0224 02:21:02.074194 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8742\" (UniqueName: \"kubernetes.io/projected/f807f33c-8132-48a8-ab12-4b54c1cd2b10-kube-api-access-g8742\") pod \"migrator-5c85bff57-t5rgn\" (UID: \"f807f33c-8132-48a8-ab12-4b54c1cd2b10\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-t5rgn"
Feb 24 02:21:02.088464 master-0 kubenswrapper[31411]: I0224 02:21:02.087501 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sp95\" (UniqueName: \"kubernetes.io/projected/c84dc269-43ae-4083-9998-a0b3c90bb681-kube-api-access-9sp95\") pod \"cluster-image-registry-operator-779979bdf7-d7sx4\" (UID: \"c84dc269-43ae-4083-9998-a0b3c90bb681\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-d7sx4"
Feb 24 02:21:02.134644 master-0 kubenswrapper[31411]: I0224 02:21:02.134503 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssz8p\" (UniqueName: \"kubernetes.io/projected/6a9ccd8e-d964-4c03-8ffc-51b464030c25-kube-api-access-ssz8p\") pod \"cluster-node-tuning-operator-bcf775fc9-8x6sd\" (UID: \"6a9ccd8e-d964-4c03-8ffc-51b464030c25\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-8x6sd"
Feb 24 02:21:02.135123 master-0 kubenswrapper[31411]: I0224 02:21:02.135044 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82hfh\" (UniqueName: \"kubernetes.io/projected/f2e9cdff-8c15-43df-b8df-7fe3a73fda86-kube-api-access-82hfh\") pod \"cluster-monitoring-operator-6bb6d78bf-fkzdb\" (UID: \"f2e9cdff-8c15-43df-b8df-7fe3a73fda86\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-fkzdb"
Feb 24 02:21:02.167606 master-0 kubenswrapper[31411]: I0224 02:21:02.164328 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxz8j\" (UniqueName: \"kubernetes.io/projected/c6153510-452b-4726-8b63-8cc894daa168-kube-api-access-lxz8j\") pod \"service-ca-576b4d78bd-nqcs2\" (UID: \"c6153510-452b-4726-8b63-8cc894daa168\") " pod="openshift-service-ca/service-ca-576b4d78bd-nqcs2"
Feb 24 02:21:02.177601 master-0 kubenswrapper[31411]: I0224 02:21:02.173752 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlg2j\" (UniqueName: \"kubernetes.io/projected/523033b8-4101-4a55-8320-55bef04ddaaf-kube-api-access-dlg2j\") pod \"ovnkube-control-plane-5d8dfcdc87-bb22k\" (UID: \"523033b8-4101-4a55-8320-55bef04ddaaf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-bb22k"
Feb 24 02:21:02.177601 master-0 kubenswrapper[31411]: I0224 02:21:02.174742 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bl6\" (UniqueName: \"kubernetes.io/projected/303d5058-84df-40d1-a941-896b093ae470-kube-api-access-79bl6\") pod \"cluster-olm-operator-5bd7768f54-7wc6k\" (UID: \"303d5058-84df-40d1-a941-896b093ae470\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-7wc6k"
Feb 24 02:21:02.191608 master-0 kubenswrapper[31411]: I0224 02:21:02.191371 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbnd2\" (UniqueName: \"kubernetes.io/projected/5da829af-05fb-4f6e-9bec-c4dcc9cbec4b-kube-api-access-bbnd2\") pod \"multus-7fbjw\" (UID: \"5da829af-05fb-4f6e-9bec-c4dcc9cbec4b\") " pod="openshift-multus/multus-7fbjw"
Feb 24 02:21:02.213170 master-0 kubenswrapper[31411]: I0224 02:21:02.213058 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7lsb\" (UniqueName: \"kubernetes.io/projected/f6e7b773-7ecd-4a5c-8bef-d672f371e7e5-kube-api-access-q7lsb\") pod \"csi-snapshot-controller-6847bb4785-8l58x\" (UID: \"f6e7b773-7ecd-4a5c-8bef-d672f371e7e5\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-8l58x"
Feb 24 02:21:02.226173 master-0 kubenswrapper[31411]: I0224 02:21:02.226117 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njjq8\" (UniqueName: \"kubernetes.io/projected/b36d8451-0fda-4d9d-a850-d05c8f847016-kube-api-access-njjq8\") pod \"openshift-apiserver-operator-8586dccc9b-sl5hz\" (UID: \"b36d8451-0fda-4d9d-a850-d05c8f847016\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-sl5hz"
Feb 24 02:21:02.250893 master-0 kubenswrapper[31411]: I0224 02:21:02.250836 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6f7j\" (UniqueName: \"kubernetes.io/projected/cabdddba-5507-4e47-98ef-a00c6d0f305d-kube-api-access-h6f7j\") pod \"authentication-operator-5bd7c86784-46vmq\" (UID: \"cabdddba-5507-4e47-98ef-a00c6d0f305d\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-46vmq"
Feb 24 02:21:02.270293 master-0 kubenswrapper[31411]: I0224 02:21:02.270238 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqqwj\" (UniqueName: \"kubernetes.io/projected/ca1250a6-30f0-4cc0-b9b0-eabde42aefcf-kube-api-access-fqqwj\") pod \"network-check-source-58fb6744f5-l4wh6\" (UID: \"ca1250a6-30f0-4cc0-b9b0-eabde42aefcf\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-l4wh6"
Feb 24 02:21:02.289991 master-0 kubenswrapper[31411]: I0224 02:21:02.289945 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8hv\" (UniqueName: \"kubernetes.io/projected/2cb764f6-40f8-4e87-8be0-b9d7b0364201-kube-api-access-sp8hv\") pod \"dns-operator-8c7d49845-hxcn2\" (UID: \"2cb764f6-40f8-4e87-8be0-b9d7b0364201\") " pod="openshift-dns-operator/dns-operator-8c7d49845-hxcn2"
Feb 24 02:21:02.309122 master-0 kubenswrapper[31411]: I0224 02:21:02.309045 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpg26\" (UniqueName: \"kubernetes.io/projected/fcbda577-b943-4b5c-b041-948aece8e40f-kube-api-access-vpg26\") pod \"kube-storage-version-migrator-operator-fc889cfd5-xdws2\" (UID: \"fcbda577-b943-4b5c-b041-948aece8e40f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-xdws2"
Feb 24 02:21:02.331773 master-0 kubenswrapper[31411]: I0224 02:21:02.331723 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtf52\" (UniqueName: \"kubernetes.io/projected/9b5620d6-a5fe-45d7-b39e-8bed7f602a17-kube-api-access-jtf52\") pod \"service-ca-operator-c48c8bf7c-6fqkr\" (UID: \"9b5620d6-a5fe-45d7-b39e-8bed7f602a17\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-6fqkr"
Feb 24 02:21:02.360377 master-0 kubenswrapper[31411]: I0224 02:21:02.360309 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nmd6\" (UniqueName: \"kubernetes.io/projected/70e2ba24-4871-4d1d-9935-156fdbeb2810-kube-api-access-4nmd6\") pod \"network-metrics-daemon-tntcf\" (UID: \"70e2ba24-4871-4d1d-9935-156fdbeb2810\") " pod="openshift-multus/network-metrics-daemon-tntcf"
Feb 24 02:21:02.370815 master-0 kubenswrapper[31411]: I0224 02:21:02.370747 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb52w\" (UniqueName: \"kubernetes.io/projected/02f1d753-983a-4c4a-b1a0-560de173859a-kube-api-access-mb52w\") pod \"olm-operator-5499d7f7bb-5g6nc\" (UID: \"02f1d753-983a-4c4a-b1a0-560de173859a\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"
Feb 24 02:21:02.391855 master-0 kubenswrapper[31411]: I0224 02:21:02.391794 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-bound-sa-token\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:21:02.411201 master-0 kubenswrapper[31411]: I0224 02:21:02.411141 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twgrj\" (UniqueName: \"kubernetes.io/projected/12b89e05-a503-47aa-90b2-4d741e015b19-kube-api-access-twgrj\") pod \"catalog-operator-596f79dd6f-8cg5c\" (UID: \"12b89e05-a503-47aa-90b2-4d741e015b19\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:21:02.412599 master-0 kubenswrapper[31411]: I0224 02:21:02.412532 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl"
Feb 24 02:21:02.412734 master-0 kubenswrapper[31411]: I0224 02:21:02.412695 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:02.413134 master-0 kubenswrapper[31411]: I0224 02:21:02.413091 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:02.413134 master-0 kubenswrapper[31411]: I0224 02:21:02.413106 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:02.413242 master-0 kubenswrapper[31411]: I0224 02:21:02.413207 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:21:02.413350 master-0 kubenswrapper[31411]: I0224 02:21:02.413311 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m"
Feb 24 02:21:02.413498 master-0 kubenswrapper[31411]: I0224 02:21:02.413421 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e68b3061-c9d2-469d-babf-7ccac0ad9b14-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl"
Feb 24 02:21:02.413800 master-0 kubenswrapper[31411]: I0224 02:21:02.413753 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e70a9f5-1154-40e9-a487-21e36e7f420a-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:21:02.413948 master-0 kubenswrapper[31411]: I0224 02:21:02.413889 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:02.414068 master-0 kubenswrapper[31411]: I0224 02:21:02.414025 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/24765ff1-5e7d-4100-ad81-8f73555fc0a2-node-exporter-tls\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m"
Feb 24 02:21:02.439682 master-0 kubenswrapper[31411]: I0224 02:21:02.439626 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6qs2\" (UniqueName: \"kubernetes.io/projected/3332acec-1553-4594-a903-a322399f6d9d-kube-api-access-x6qs2\") pod \"network-operator-7d7db75979-drrqm\" (UID: \"3332acec-1553-4594-a903-a322399f6d9d\") " pod="openshift-network-operator/network-operator-7d7db75979-drrqm"
Feb 24 02:21:02.450602 master-0 kubenswrapper[31411]: I0224 02:21:02.450541 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5463fbf-ac21-4058-9a3b-30d0e5ea31b7-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8tttg\" (UID: \"f5463fbf-ac21-4058-9a3b-30d0e5ea31b7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8tttg"
Feb 24 02:21:02.470821 master-0 kubenswrapper[31411]: I0224 02:21:02.470726 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqqkv\" (UniqueName: \"kubernetes.io/projected/7b4e3ba0-5194-4e20-8f12-dea4b67504fe-kube-api-access-dqqkv\") pod \"cluster-baremetal-operator-d6bb9bb76-k98fq\" (UID: \"7b4e3ba0-5194-4e20-8f12-dea4b67504fe\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k98fq"
Feb 24 02:21:02.489363 master-0 kubenswrapper[31411]: I0224 02:21:02.489315 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwjpw\" (UniqueName: \"kubernetes.io/projected/fbe9964a-9e82-48e9-82b0-7c07e4cec3a2-kube-api-access-pwjpw\") pod \"etcd-operator-545bf96f4d-jb9vb\" (UID: \"fbe9964a-9e82-48e9-82b0-7c07e4cec3a2\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jb9vb"
Feb 24 02:21:02.518477 master-0 kubenswrapper[31411]: I0224 02:21:02.518419 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:02.519078 master-0 kubenswrapper[31411]: I0224 02:21:02.519019 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qph4g\" (UniqueName: \"kubernetes.io/projected/c3278a82-ee70-4d6c-9c96-f8cb1bcb9334-kube-api-access-qph4g\") pod \"ingress-operator-6569778c84-6dlqb\" (UID: \"c3278a82-ee70-4d6c-9c96-f8cb1bcb9334\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-6dlqb"
Feb 24 02:21:02.519341 master-0 kubenswrapper[31411]: I0224 02:21:02.519297 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:02.543809 master-0 kubenswrapper[31411]: I0224 02:21:02.543682 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtld\" (UniqueName: \"kubernetes.io/projected/adc1097b-c1ab-4f09-965d-1c819671475b-kube-api-access-nqtld\") pod \"network-node-identity-p5b6q\" (UID: \"adc1097b-c1ab-4f09-965d-1c819671475b\") " pod="openshift-network-node-identity/network-node-identity-p5b6q"
Feb 24 02:21:02.560022 master-0 kubenswrapper[31411]: I0224 02:21:02.559942 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb7jb\" (UniqueName: \"kubernetes.io/projected/8e70a9f5-1154-40e9-a487-21e36e7f420a-kube-api-access-mb7jb\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-8znkt\" (UID: \"8e70a9f5-1154-40e9-a487-21e36e7f420a\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-8znkt"
Feb 24 02:21:02.578007 master-0 kubenswrapper[31411]: I0224 02:21:02.577933 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt2q4\" (UniqueName: \"kubernetes.io/projected/91168f3d-70eb-4351-bb83-5411a96ad29d-kube-api-access-rt2q4\") pod \"cluster-autoscaler-operator-86b8dc6d6-mtrdk\" (UID: \"91168f3d-70eb-4351-bb83-5411a96ad29d\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mtrdk"
Feb 24 02:21:02.599353 master-0 kubenswrapper[31411]: I0224 02:21:02.599313 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbzsl\" (UniqueName: \"kubernetes.io/projected/3e36c9eb-0368-46dc-af84-9c602a15555d-kube-api-access-lbzsl\") pod \"ingress-canary-jjpsc\" (UID: \"3e36c9eb-0368-46dc-af84-9c602a15555d\") " pod="openshift-ingress-canary/ingress-canary-jjpsc"
Feb 24 02:21:02.618667 master-0 kubenswrapper[31411]: I0224 02:21:02.618549 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98725\" (UniqueName: \"kubernetes.io/projected/24765ff1-5e7d-4100-ad81-8f73555fc0a2-kube-api-access-98725\") pod \"node-exporter-2qn8m\" (UID: \"24765ff1-5e7d-4100-ad81-8f73555fc0a2\") " pod="openshift-monitoring/node-exporter-2qn8m"
Feb 24 02:21:02.638249 master-0 kubenswrapper[31411]: I0224 02:21:02.638184 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg7sb\" (UniqueName: \"kubernetes.io/projected/e56a17d6-d740-4349-833e-b5279f7db2d4-kube-api-access-gg7sb\") pod \"redhat-operators-4znnj\" (UID: \"e56a17d6-d740-4349-833e-b5279f7db2d4\") " pod="openshift-marketplace/redhat-operators-4znnj"
Feb 24 02:21:02.657102 master-0 kubenswrapper[31411]: I0224 02:21:02.657013 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gckc2\" (UniqueName: \"kubernetes.io/projected/8ebd1a97-ff7b-4a10-a1b5-956e427478a8-kube-api-access-gckc2\") pod \"machine-approver-7dd9c7d7b9-sjqsx\" (UID: \"8ebd1a97-ff7b-4a10-a1b5-956e427478a8\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-sjqsx"
Feb 24 02:21:02.678931 master-0 kubenswrapper[31411]: I0224 02:21:02.678058 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kube-api-access\") pod \"installer-3-master-0\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 24 02:21:02.698150 master-0 kubenswrapper[31411]: I0224 02:21:02.698102 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-996wg\" (UniqueName: \"kubernetes.io/projected/638b3f88-0386-4f30-8ca5-6255e8f936fc-kube-api-access-996wg\") pod \"tuned-26b2v\" (UID: \"638b3f88-0386-4f30-8ca5-6255e8f936fc\") " pod="openshift-cluster-node-tuning-operator/tuned-26b2v"
Feb 24 02:21:02.713828 master-0 kubenswrapper[31411]: I0224 02:21:02.713751 31411 request.go:700] Waited for 3.863140492s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token
Feb 24 02:21:02.716376 master-0 kubenswrapper[31411]: I0224 02:21:02.716321 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4q7n\" (UniqueName: \"kubernetes.io/projected/df42c69b-1a0e-41f5-9006-17540369b9ad-kube-api-access-f4q7n\") pod \"machine-config-daemon-hfpql\" (UID: \"df42c69b-1a0e-41f5-9006-17540369b9ad\") " pod="openshift-machine-config-operator/machine-config-daemon-hfpql"
Feb 24 02:21:02.737966 master-0 kubenswrapper[31411]: I0224 02:21:02.737569 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xz68\" (UniqueName: \"kubernetes.io/projected/e68b3061-c9d2-469d-babf-7ccac0ad9b14-kube-api-access-6xz68\") pod \"multus-admission-controller-5f54bf67d4-ctssl\" (UID: \"e68b3061-c9d2-469d-babf-7ccac0ad9b14\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-ctssl"
Feb 24 02:21:02.757747 master-0 kubenswrapper[31411]: I0224 02:21:02.757675 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/732a3831-20e0-47dc-a29a-8bb4659541b7-kube-api-access\") pod \"cluster-version-operator-57476485-9cjj5\" (UID: \"732a3831-20e0-47dc-a29a-8bb4659541b7\") " pod="openshift-cluster-version/cluster-version-operator-57476485-9cjj5"
Feb 24 02:21:02.776353 master-0 kubenswrapper[31411]: I0224 02:21:02.776290 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57x9m\" (UniqueName: \"kubernetes.io/projected/a4cea44a-1c6e-465f-97df-2c951056cb85-kube-api-access-57x9m\") pod \"control-plane-machine-set-operator-686847ff5f-ckntz\" (UID: \"a4cea44a-1c6e-465f-97df-2c951056cb85\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-ckntz"
Feb 24 02:21:02.790941 master-0 kubenswrapper[31411]: I0224 02:21:02.790877 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9ngc\" (UniqueName: \"kubernetes.io/projected/0cb042de-c873-408c-a4c4-ef9f7e546a08-kube-api-access-p9ngc\") pod \"certified-operators-brpmb\" (UID: \"0cb042de-c873-408c-a4c4-ef9f7e546a08\") " pod="openshift-marketplace/certified-operators-brpmb"
Feb 24 02:21:02.812018 master-0 kubenswrapper[31411]: I0224 02:21:02.811945 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmhx\" (UniqueName: \"kubernetes.io/projected/74a7801b-b7a4-4292-91b3-6285c239aeb7-kube-api-access-pdmhx\") pod \"cloud-credential-operator-6968c58f46-fcr59\" (UID: \"74a7801b-b7a4-4292-91b3-6285c239aeb7\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-fcr59"
Feb 24 02:21:02.831057 master-0 kubenswrapper[31411]: I0224 02:21:02.830986 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxjtb\" (UniqueName: \"kubernetes.io/projected/e8d6a6c0-b944-4206-9178-9a9930b303b9-kube-api-access-zxjtb\") pod \"controller-manager-56b6d9c5b7-lxwt6\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"
Feb 24 02:21:02.851475 master-0 kubenswrapper[31411]: I0224 02:21:02.851417 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkpjn\" (UniqueName: \"kubernetes.io/projected/390a7aa5-c7f7-4baf-a2d2-e6da9a465042-kube-api-access-dkpjn\") pod \"node-resolver-4lwwp\" (UID: \"390a7aa5-c7f7-4baf-a2d2-e6da9a465042\") " pod="openshift-dns/node-resolver-4lwwp"
Feb 24 02:21:02.879173 master-0 kubenswrapper[31411]: I0224 02:21:02.879054 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4grf\" (UniqueName: \"kubernetes.io/projected/8c396c41-c617-4631-9700-a7052af5a276-kube-api-access-n4grf\") pod \"metrics-server-7b9cc5984b-smpdl\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl"
Feb 24 02:21:02.904105 master-0 kubenswrapper[31411]: I0224 02:21:02.904041 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Feb 24 02:21:02.911817 master-0 kubenswrapper[31411]: I0224 02:21:02.911742 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh4wr\" (UniqueName: \"kubernetes.io/projected/011c6603-d533-4449-b409-f6f698a3bd50-kube-api-access-xh4wr\") pod \"cluster-storage-operator-f94476f49-c5wlk\" (UID: \"011c6603-d533-4449-b409-f6f698a3bd50\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-c5wlk"
Feb 24 02:21:02.931249 master-0 kubenswrapper[31411]: I0224 02:21:02.931182 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pc72\" (UniqueName: \"kubernetes.io/projected/a4267e3a-aaaf-4b2f-a37c-0f097a35783f-kube-api-access-5pc72\") pod \"community-operators-kkwwl\" (UID: \"a4267e3a-aaaf-4b2f-a37c-0f097a35783f\") " pod="openshift-marketplace/community-operators-kkwwl"
Feb 24 02:21:02.958086 master-0 kubenswrapper[31411]: I0224 02:21:02.958022 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wvl\" (UniqueName: \"kubernetes.io/projected/e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d-kube-api-access-w6wvl\") pod \"machine-config-controller-54cb48566c-xzpl4\" (UID: \"e76f58c7-471f-4f1d-bb1f-5df1af4eeb5d\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-xzpl4"
Feb 24 02:21:02.977418 master-0 kubenswrapper[31411]: I0224 02:21:02.977346 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tszx\" (UniqueName: \"kubernetes.io/projected/4f5b3b93-a59d-495c-a311-8913fa6000fc-kube-api-access-2tszx\") pod \"catalogd-controller-manager-84b8d9d697-jhklz\" (UID: \"4f5b3b93-a59d-495c-a311-8913fa6000fc\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz"
Feb 24 02:21:02.997601 master-0 kubenswrapper[31411]: I0224 02:21:02.997465 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dcvb\" (UniqueName: \"kubernetes.io/projected/8e0c87ae-6387-4c00-b03d-582566907fb6-kube-api-access-5dcvb\") pod \"insights-operator-59b498fcfb-dbkwd\" (UID: \"8e0c87ae-6387-4c00-b03d-582566907fb6\") " pod="openshift-insights/insights-operator-59b498fcfb-dbkwd"
Feb 24 02:21:03.019116 master-0 kubenswrapper[31411]: I0224 02:21:03.019052 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzghr\" (UniqueName: \"kubernetes.io/projected/55a2662a-d672-4a46-9b81-bfcaf334eedb-kube-api-access-gzghr\") pod \"route-controller-manager-676fddcd58-49xzd\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:03.046336 master-0 kubenswrapper[31411]: I0224 02:21:03.045076 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpg44\" (UniqueName: \"kubernetes.io/projected/8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c-kube-api-access-qpg44\") pod \"dns-default-5rf6m\" (UID: \"8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c\") " pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:21:03.069825 master-0 kubenswrapper[31411]: I0224 02:21:03.069745 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kznmr\" (UniqueName: \"kubernetes.io/projected/a5305004-5311-4bc4-ad7c-6670f97c89cb-kube-api-access-kznmr\") pod \"kube-state-metrics-59584d565f-f6f26\" (UID: \"a5305004-5311-4bc4-ad7c-6670f97c89cb\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-f6f26"
Feb 24 02:21:03.078229 master-0 kubenswrapper[31411]: I0224 02:21:03.078168 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcq24\" (UniqueName: \"kubernetes.io/projected/b176946a-c056-441c-9145-b88ca4d75758-kube-api-access-kcq24\") pod \"apiserver-77597cc7cf-8j2k2\" (UID: \"b176946a-c056-441c-9145-b88ca4d75758\") " pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:21:03.101753 master-0 kubenswrapper[31411]: I0224 02:21:03.101674 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbc5w\" (UniqueName: \"kubernetes.io/projected/0127e0d5-9961-4ff6-851d-884e71e1dcf2-kube-api-access-nbc5w\") pod \"cluster-samples-operator-65c5c48b9b-bkc9s\" (UID: \"0127e0d5-9961-4ff6-851d-884e71e1dcf2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-bkc9s"
Feb 24 02:21:03.117363 master-0 kubenswrapper[31411]: I0224 02:21:03.117284 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsb4q\" (UniqueName: \"kubernetes.io/projected/25190a18-bdac-479b-b526-840d28636be3-kube-api-access-bsb4q\") pod \"apiserver-79dc9447fd-x64vl\" (UID: \"25190a18-bdac-479b-b526-840d28636be3\") " pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:21:03.137943 master-0 kubenswrapper[31411]: I0224 02:21:03.137880 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtsj\" (UniqueName: \"kubernetes.io/projected/4a2d8ef6-14ac-490d-a931-7082344d3f46-kube-api-access-ddtsj\") pod \"operator-controller-controller-manager-9cc7d7bb-hvr8b\" (UID: \"4a2d8ef6-14ac-490d-a931-7082344d3f46\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b"
Feb 24 02:21:03.156331 master-0 kubenswrapper[31411]: I0224 02:21:03.156273 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px2vd\" (UniqueName: \"kubernetes.io/projected/608a8a56-daee-4fa1-8300-42155217c68b-kube-api-access-px2vd\") pod \"openshift-state-metrics-6dbff8cb4c-swtr6\" (UID: \"608a8a56-daee-4fa1-8300-42155217c68b\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-swtr6"
Feb 24 02:21:03.179291 master-0 kubenswrapper[31411]: I0224 02:21:03.179240 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv6zq\" (UniqueName: \"kubernetes.io/projected/9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1-kube-api-access-rv6zq\") pod \"machine-config-server-drf28\" (UID: \"9f34dc85-8fd3-4c8c-ad30-32a956f6f9e1\") " pod="openshift-machine-config-operator/machine-config-server-drf28"
Feb 24 02:21:03.200113 master-0 kubenswrapper[31411]: I0224 02:21:03.200027 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q86gx\" (UniqueName: \"kubernetes.io/projected/b085f760-0e24-41a8-af09-538396aad935-kube-api-access-q86gx\") pod \"redhat-marketplace-qqt7p\" (UID: \"b085f760-0e24-41a8-af09-538396aad935\") " pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:21:03.217223 master-0 kubenswrapper[31411]: I0224 02:21:03.217166 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd796\" (UniqueName: \"kubernetes.io/projected/df2b8111-41c6-4333-b473-4c08fb836f70-kube-api-access-cd796\") pod \"prometheus-operator-754bc4d665-66lml\" (UID: \"df2b8111-41c6-4333-b473-4c08fb836f70\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-66lml"
Feb 24 02:21:03.239489 master-0 kubenswrapper[31411]: I0224 02:21:03.239415 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svc78\" (UniqueName: \"kubernetes.io/projected/9cad383a-cb69-41a8-aec8-23ee1c930430-kube-api-access-svc78\") pod \"packageserver-597975fc65-xcl6c\" (UID: \"9cad383a-cb69-41a8-aec8-23ee1c930430\") " pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c"
Feb 24 02:21:03.259965 master-0 kubenswrapper[31411]: I0224 02:21:03.259845 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5kx\" (UniqueName: \"kubernetes.io/projected/6a08a1e4-cf92-4733-a8af-c7ac5b21e925-kube-api-access-qc5kx\") pod \"router-default-7b65dc9fcb-22sgl\" (UID: \"6a08a1e4-cf92-4733-a8af-c7ac5b21e925\") " pod="openshift-ingress/router-default-7b65dc9fcb-22sgl"
Feb 24 02:21:03.282832 master-0 kubenswrapper[31411]: I0224 02:21:03.282771 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8msx\" (UniqueName: \"kubernetes.io/projected/0ce6dd93-084c-4e15-8b7c-e0829a6df14e-kube-api-access-q8msx\") pod \"machine-api-operator-5c7cf458b4-dsjgm\" (UID: \"0ce6dd93-084c-4e15-8b7c-e0829a6df14e\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-dsjgm"
Feb 24 02:21:03.300018 master-0 kubenswrapper[31411]: E0224 02:21:03.299948 31411 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 02:21:03.300018 master-0 kubenswrapper[31411]: E0224 02:21:03.300021 31411 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 02:21:03.300234 master-0 kubenswrapper[31411]: E0224 02:21:03.300150 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access podName:5508683b-09ae-47a1-89fd-b0891a881e09 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:03.800110329 +0000 UTC m=+7.017308215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access") pod "installer-2-master-0" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 02:21:03.307038 master-0 kubenswrapper[31411]: E0224 02:21:03.306957 31411 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.216s"
Feb 24 02:21:03.307038 master-0 kubenswrapper[31411]: I0224 02:21:03.307021 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 24 02:21:03.307241 master-0 kubenswrapper[31411]: I0224 02:21:03.307050 31411 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="906fbe12-582c-4c3d-8417-22d9670712ed"
Feb 24 02:21:03.307241 master-0 kubenswrapper[31411]: I0224 02:21:03.307086 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 24 02:21:03.307241 master-0 kubenswrapper[31411]: I0224 02:21:03.307119 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:21:03.307241 master-0 kubenswrapper[31411]: I0224 02:21:03.307195 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 24 02:21:03.307488 master-0 kubenswrapper[31411]: I0224 02:21:03.307312 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:21:03.324786 master-0 kubenswrapper[31411]: I0224 02:21:03.324715 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 24 02:21:03.357596 master-0 kubenswrapper[31411]: I0224 02:21:03.357509 31411 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Feb 24 02:21:03.357745 master-0 kubenswrapper[31411]: I0224 02:21:03.357717 31411 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 24 02:21:03.376985 master-0 kubenswrapper[31411]: I0224 02:21:03.376914 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 24 02:21:03.376985 master-0 kubenswrapper[31411]: I0224 02:21:03.376972 31411 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="906fbe12-582c-4c3d-8417-22d9670712ed"
Feb 24 02:21:03.377195 master-0 kubenswrapper[31411]: I0224 02:21:03.377055 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:03.377195 master-0 kubenswrapper[31411]: I0224 02:21:03.377099 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:21:03.377195 master-0 kubenswrapper[31411]: I0224 02:21:03.377126 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:03.377368 master-0 kubenswrapper[31411]: I0224 02:21:03.377216 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:03.377368 master-0 kubenswrapper[31411]: I0224 02:21:03.377286 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:21:03.377562 master-0 kubenswrapper[31411]: I0224 02:21:03.377522 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:21:03.377830 master-0 kubenswrapper[31411]: I0224 02:21:03.377762 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-ccrxg"
Feb 24 02:21:03.378024 master-0 kubenswrapper[31411]: I0224 02:21:03.377983 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:21:03.378134 master-0 kubenswrapper[31411]: I0224 02:21:03.378027 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"
Feb 24 02:21:03.378134 master-0 kubenswrapper[31411]: I0224 02:21:03.378073 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-5g6nc"
Feb 24 02:21:03.378134 master-0 kubenswrapper[31411]: I0224 02:21:03.378098 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:21:03.378463 master-0 kubenswrapper[31411]: I0224 02:21:03.378398 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-8cg5c"
Feb 24 02:21:03.378557 master-0 kubenswrapper[31411]: I0224 02:21:03.378512 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:21:03.378663 master-0 kubenswrapper[31411]: I0224 02:21:03.378568 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:21:03.378663 master-0 kubenswrapper[31411]: I0224 02:21:03.378640 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5rf6m"
Feb 24 02:21:03.378783 master-0 kubenswrapper[31411]: I0224 02:21:03.378689 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:21:03.378783 master-0 kubenswrapper[31411]: I0224 02:21:03.378758 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl"
Feb 24 02:21:03.378934 master-0 kubenswrapper[31411]: I0224 02:21:03.378802 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2"
Feb 24 02:21:03.378934 master-0 kubenswrapper[31411]: I0224 02:21:03.378824 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:21:03.378934 master-0 kubenswrapper[31411]: I0224 02:21:03.378907 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:21:03.379146 master-0 kubenswrapper[31411]: I0224 02:21:03.378946 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2hllb"
Feb 24 02:21:03.491612 master-0 kubenswrapper[31411]: I0224 02:21:03.491514 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl"
Feb 24 02:21:03.532955 master-0 kubenswrapper[31411]: I0224 02:21:03.532682 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 02:21:03.533334 master-0 kubenswrapper[31411]: I0224 02:21:03.533256 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 02:21:03.787355 master-0 kubenswrapper[31411]: I0224 02:21:03.787158 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qqt7p"
Feb 24 02:21:03.838167 master-0 kubenswrapper[31411]: I0224 02:21:03.838048 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:21:03.838321 master-0 kubenswrapper[31411]: I0224 02:21:03.838226 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 02:21:03.842698 master-0 kubenswrapper[31411]: I0224 02:21:03.842629 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-54b95"
Feb 24 02:21:04.267902 master-0 kubenswrapper[31411]: I0224 02:21:04.267832 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 24 02:21:04.285294 master-0 kubenswrapper[31411]: I0224 02:21:04.275219 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:04.285405 master-0 kubenswrapper[31411]: I0224 02:21:04.285381 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:21:04.285475 master-0 kubenswrapper[31411]: I0224 02:21:04.285414 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"
Feb 24 02:21:04.285623 master-0 kubenswrapper[31411]: I0224 02:21:04.285598 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 02:21:04.285882 master-0 kubenswrapper[31411]: E0224 02:21:04.275766 31411 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 02:21:04.285969 master-0 kubenswrapper[31411]: I0224 02:21:04.279066 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=9.279031873 podStartE2EDuration="9.279031873s" podCreationTimestamp="2026-02-24 02:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:21:04.277707606 +0000 UTC m=+7.494905482" watchObservedRunningTime="2026-02-24 02:21:04.279031873 +0000 UTC m=+7.496229759"
Feb 24 02:21:04.286062 master-0 kubenswrapper[31411]: E0224 02:21:04.285958 31411 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 02:21:04.295410 master-0 kubenswrapper[31411]: E0224 02:21:04.286244 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access podName:5508683b-09ae-47a1-89fd-b0891a881e09 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:05.286153072 +0000 UTC m=+8.503350958 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access") pod "installer-2-master-0" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 02:21:04.295410 master-0 kubenswrapper[31411]: I0224 02:21:04.286829 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:21:04.295410 master-0 kubenswrapper[31411]: I0224 02:21:04.288704 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"
Feb 24 02:21:04.304608 master-0 kubenswrapper[31411]: I0224 02:21:04.302174 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-9gkp2"
Feb 24 02:21:04.497419 master-0 kubenswrapper[31411]: I0224 02:21:04.497309 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 02:21:04.497419 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld
Feb 24 02:21:04.497419 master-0 kubenswrapper[31411]: [+]process-running ok
Feb 24 02:21:04.497419 master-0 kubenswrapper[31411]: healthz check failed
Feb 24 02:21:04.498104 master-0 kubenswrapper[31411]: I0224 02:21:04.497430 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 02:21:04.580365 master-0 kubenswrapper[31411]: I0224 02:21:04.580163 31411 kubelet.go:2542] "SyncLoop (probe)"
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:21:04.625674 master-0 kubenswrapper[31411]: I0224 02:21:04.623520 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:21:04.658292 master-0 kubenswrapper[31411]: I0224 02:21:04.658205 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:21:04.699030 master-0 kubenswrapper[31411]: I0224 02:21:04.698948 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:21:04.742868 master-0 kubenswrapper[31411]: I0224 02:21:04.742777 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:21:04.743538 master-0 kubenswrapper[31411]: I0224 02:21:04.743489 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:04.747631 master-0 kubenswrapper[31411]: I0224 02:21:04.747507 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-hvr8b" Feb 24 02:21:05.271534 master-0 kubenswrapper[31411]: I0224 02:21:05.271432 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:21:05.272499 master-0 kubenswrapper[31411]: I0224 02:21:05.271690 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:05.274450 master-0 kubenswrapper[31411]: I0224 02:21:05.274381 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" Feb 24 02:21:05.316602 master-0 kubenswrapper[31411]: I0224 02:21:05.316501 31411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:21:05.317074 master-0 kubenswrapper[31411]: E0224 02:21:05.317027 31411 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:05.317074 master-0 kubenswrapper[31411]: E0224 02:21:05.317072 31411 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:05.317221 master-0 kubenswrapper[31411]: E0224 02:21:05.317162 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access podName:5508683b-09ae-47a1-89fd-b0891a881e09 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:07.317133354 +0000 UTC m=+10.534331230 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access") pod "installer-2-master-0" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:05.369098 master-0 kubenswrapper[31411]: I0224 02:21:05.369038 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:21:05.442560 master-0 kubenswrapper[31411]: I0224 02:21:05.442482 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:21:05.455764 master-0 kubenswrapper[31411]: I0224 02:21:05.455678 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:05.455969 master-0 kubenswrapper[31411]: I0224 02:21:05.455933 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:05.466033 master-0 kubenswrapper[31411]: I0224 02:21:05.465910 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:05.495642 master-0 kubenswrapper[31411]: I0224 02:21:05.495532 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:05.495642 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:05.495642 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:05.495642 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:05.498274 master-0 kubenswrapper[31411]: I0224 02:21:05.495677 31411 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:05.553209 master-0 kubenswrapper[31411]: I0224 02:21:05.553018 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:05.561877 master-0 kubenswrapper[31411]: I0224 02:21:05.561842 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:05.620726 master-0 kubenswrapper[31411]: I0224 02:21:05.620484 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=7.620404643 podStartE2EDuration="7.620404643s" podCreationTimestamp="2026-02-24 02:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:21:05.618239693 +0000 UTC m=+8.835437579" watchObservedRunningTime="2026-02-24 02:21:05.620404643 +0000 UTC m=+8.837602529" Feb 24 02:21:05.963596 master-0 kubenswrapper[31411]: I0224 02:21:05.962824 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:05.972111 master-0 kubenswrapper[31411]: I0224 02:21:05.972058 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:06.269345 master-0 kubenswrapper[31411]: I0224 02:21:06.269085 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Feb 24 02:21:06.273067 master-0 kubenswrapper[31411]: I0224 02:21:06.272972 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-scheduler/installer-3-retry-1-master-0"] Feb 24 02:21:06.333120 master-0 kubenswrapper[31411]: I0224 02:21:06.333046 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:21:06.333277 master-0 kubenswrapper[31411]: I0224 02:21:06.333263 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:06.336978 master-0 kubenswrapper[31411]: I0224 02:21:06.336900 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:21:06.501922 master-0 kubenswrapper[31411]: I0224 02:21:06.501829 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:06.501922 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:06.501922 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:06.501922 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:06.502406 master-0 kubenswrapper[31411]: I0224 02:21:06.501943 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:06.574146 master-0 kubenswrapper[31411]: I0224 02:21:06.573888 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" Feb 24 02:21:06.574487 master-0 kubenswrapper[31411]: I0224 02:21:06.574318 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:06.579788 master-0 kubenswrapper[31411]: I0224 
02:21:06.579706 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" Feb 24 02:21:06.896355 master-0 kubenswrapper[31411]: I0224 02:21:06.896177 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:21:06.896847 master-0 kubenswrapper[31411]: I0224 02:21:06.896415 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:06.970123 master-0 kubenswrapper[31411]: I0224 02:21:06.970073 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4znnj" Feb 24 02:21:07.119544 master-0 kubenswrapper[31411]: I0224 02:21:07.119485 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f0c4d0-17dd-49ed-a8a4-7be1d82738c7" path="/var/lib/kubelet/pods/27f0c4d0-17dd-49ed-a8a4-7be1d82738c7/volumes" Feb 24 02:21:07.373537 master-0 kubenswrapper[31411]: I0224 02:21:07.373471 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:21:07.374639 master-0 kubenswrapper[31411]: E0224 02:21:07.373804 31411 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:07.374749 master-0 kubenswrapper[31411]: E0224 02:21:07.374654 31411 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:07.374826 master-0 kubenswrapper[31411]: E0224 02:21:07.374768 31411 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access podName:5508683b-09ae-47a1-89fd-b0891a881e09 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:11.374728378 +0000 UTC m=+14.591926264 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access") pod "installer-2-master-0" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:07.495607 master-0 kubenswrapper[31411]: I0224 02:21:07.495472 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:07.495607 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:07.495607 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:07.495607 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:07.495607 master-0 kubenswrapper[31411]: I0224 02:21:07.495594 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:07.620101 master-0 kubenswrapper[31411]: I0224 02:21:07.620021 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:21:07.620419 master-0 kubenswrapper[31411]: I0224 02:21:07.620239 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:07.620419 master-0 kubenswrapper[31411]: I0224 02:21:07.620255 31411 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Feb 24 02:21:07.620419 master-0 kubenswrapper[31411]: I0224 02:21:07.620262 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:07.681983 master-0 kubenswrapper[31411]: I0224 02:21:07.681802 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:21:07.919450 master-0 kubenswrapper[31411]: I0224 02:21:07.919399 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:21:07.953494 master-0 kubenswrapper[31411]: I0224 02:21:07.953458 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6" Feb 24 02:21:08.038440 master-0 kubenswrapper[31411]: I0224 02:21:08.038371 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:21:08.038705 master-0 kubenswrapper[31411]: I0224 02:21:08.038624 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:08.043673 master-0 kubenswrapper[31411]: I0224 02:21:08.042837 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhklz" Feb 24 02:21:08.137240 master-0 kubenswrapper[31411]: I0224 02:21:08.137169 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:21:08.191487 master-0 kubenswrapper[31411]: I0224 02:21:08.191421 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-79dc9447fd-x64vl" Feb 24 02:21:08.197930 master-0 kubenswrapper[31411]: I0224 02:21:08.196078 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-77597cc7cf-8j2k2" Feb 24 
02:21:08.270033 master-0 kubenswrapper[31411]: I0224 02:21:08.268190 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:21:08.270033 master-0 kubenswrapper[31411]: I0224 02:21:08.268384 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:08.338599 master-0 kubenswrapper[31411]: I0224 02:21:08.336554 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-brpmb" Feb 24 02:21:08.499599 master-0 kubenswrapper[31411]: I0224 02:21:08.498714 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:08.499599 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:08.499599 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:08.499599 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:08.499599 master-0 kubenswrapper[31411]: I0224 02:21:08.498770 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:08.577189 master-0 kubenswrapper[31411]: I0224 02:21:08.577053 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:08.690957 master-0 kubenswrapper[31411]: I0224 02:21:08.690894 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 24 02:21:08.702277 master-0 kubenswrapper[31411]: I0224 02:21:08.702235 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 24 
02:21:08.797639 master-0 kubenswrapper[31411]: I0224 02:21:08.797567 31411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 24 02:21:08.797906 master-0 kubenswrapper[31411]: I0224 02:21:08.797859 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="95806c9442ee27c355bfbf25ba6f70f0" containerName="startup-monitor" containerID="cri-o://d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8" gracePeriod=5 Feb 24 02:21:08.828252 master-0 kubenswrapper[31411]: I0224 02:21:08.828151 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:21:08.828407 master-0 kubenswrapper[31411]: I0224 02:21:08.828349 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:08.880930 master-0 kubenswrapper[31411]: I0224 02:21:08.880870 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kkwwl" Feb 24 02:21:09.494401 master-0 kubenswrapper[31411]: I0224 02:21:09.494339 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:09.494401 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:09.494401 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:09.494401 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:09.494803 master-0 kubenswrapper[31411]: I0224 02:21:09.494425 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:09.595463 master-0 kubenswrapper[31411]: I0224 02:21:09.592472 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:10.495936 master-0 kubenswrapper[31411]: I0224 02:21:10.495856 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:10.495936 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:10.495936 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:10.495936 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:10.496309 master-0 kubenswrapper[31411]: I0224 02:21:10.495963 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:11.443263 master-0 kubenswrapper[31411]: I0224 02:21:11.443190 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:21:11.444017 master-0 kubenswrapper[31411]: E0224 02:21:11.443455 31411 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:11.444017 master-0 kubenswrapper[31411]: E0224 02:21:11.443503 31411 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object 
"openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:11.444017 master-0 kubenswrapper[31411]: E0224 02:21:11.443588 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access podName:5508683b-09ae-47a1-89fd-b0891a881e09 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:19.443552734 +0000 UTC m=+22.660750580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access") pod "installer-2-master-0" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:11.493466 master-0 kubenswrapper[31411]: I0224 02:21:11.493407 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:11.493466 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:11.493466 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:11.493466 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:11.493784 master-0 kubenswrapper[31411]: I0224 02:21:11.493479 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:12.149192 master-0 kubenswrapper[31411]: I0224 02:21:12.149102 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:21:12.236504 master-0 kubenswrapper[31411]: I0224 02:21:12.236374 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:21:12.494118 master-0 kubenswrapper[31411]: I0224 02:21:12.494001 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:12.494118 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:12.494118 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:12.494118 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:12.495257 master-0 kubenswrapper[31411]: I0224 02:21:12.494133 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:12.591434 master-0 kubenswrapper[31411]: I0224 02:21:12.591350 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:12.602118 master-0 kubenswrapper[31411]: I0224 02:21:12.602013 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:12.692483 master-0 kubenswrapper[31411]: I0224 02:21:12.692411 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qqt7p" Feb 24 02:21:13.497753 master-0 kubenswrapper[31411]: I0224 02:21:13.497666 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:13.497753 master-0 kubenswrapper[31411]: [-]has-synced failed: 
reason withheld Feb 24 02:21:13.497753 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:13.497753 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:13.498498 master-0 kubenswrapper[31411]: I0224 02:21:13.497805 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:13.919481 master-0 kubenswrapper[31411]: I0224 02:21:13.919434 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-gmjbd"] Feb 24 02:21:13.919770 master-0 kubenswrapper[31411]: E0224 02:21:13.919744 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" containerName="installer" Feb 24 02:21:13.919770 master-0 kubenswrapper[31411]: I0224 02:21:13.919764 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" containerName="installer" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: E0224 02:21:13.919785 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerName="assisted-installer-controller" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: I0224 02:21:13.919792 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerName="assisted-installer-controller" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: E0224 02:21:13.919816 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95806c9442ee27c355bfbf25ba6f70f0" containerName="startup-monitor" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: I0224 02:21:13.919822 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="95806c9442ee27c355bfbf25ba6f70f0" containerName="startup-monitor" Feb 24 02:21:13.919869 
master-0 kubenswrapper[31411]: E0224 02:21:13.919831 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b7ea36-8849-4955-80b5-c7e7c12fcc29" containerName="installer" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: I0224 02:21:13.919837 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b7ea36-8849-4955-80b5-c7e7c12fcc29" containerName="installer" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: E0224 02:21:13.919848 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd02da41-8a48-4436-ae58-6363e7554898" containerName="installer" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: I0224 02:21:13.919854 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd02da41-8a48-4436-ae58-6363e7554898" containerName="installer" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: E0224 02:21:13.919867 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" containerName="installer" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: I0224 02:21:13.919873 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" containerName="installer" Feb 24 02:21:13.919869 master-0 kubenswrapper[31411]: E0224 02:21:13.919881 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06fb1d82-f9e9-473b-80c5-767ec3948bd4" containerName="collect-profiles" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.919891 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="06fb1d82-f9e9-473b-80c5-767ec3948bd4" containerName="collect-profiles" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: E0224 02:21:13.919903 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c78047-1c4d-4535-ba2c-31f080d6a57d" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.919910 31411 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="50c78047-1c4d-4535-ba2c-31f080d6a57d" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: E0224 02:21:13.919919 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24983c94-f158-4a07-854b-2e5455374f19" containerName="collect-profiles" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.919925 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="24983c94-f158-4a07-854b-2e5455374f19" containerName="collect-profiles" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: E0224 02:21:13.919933 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12652f5-003f-4b77-b2bb-b666c9d7bb53" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.919939 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12652f5-003f-4b77-b2bb-b666c9d7bb53" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: E0224 02:21:13.919952 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683deae1-94b1-4c17-a73f-ad628a09134b" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.919957 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="683deae1-94b1-4c17-a73f-ad628a09134b" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920067 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12652f5-003f-4b77-b2bb-b666c9d7bb53" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920080 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c78047-1c4d-4535-ba2c-31f080d6a57d" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920096 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="06fb1d82-f9e9-473b-80c5-767ec3948bd4" containerName="collect-profiles" Feb 24 02:21:13.920285 master-0 
kubenswrapper[31411]: I0224 02:21:13.920118 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="5508683b-09ae-47a1-89fd-b0891a881e09" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920127 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="95806c9442ee27c355bfbf25ba6f70f0" containerName="startup-monitor" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920136 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="683deae1-94b1-4c17-a73f-ad628a09134b" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920148 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9f7dc4-e69f-4fc1-bb1a-1878971d279d" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920156 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="24983c94-f158-4a07-854b-2e5455374f19" containerName="collect-profiles" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920170 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd02da41-8a48-4436-ae58-6363e7554898" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920178 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b7ea36-8849-4955-80b5-c7e7c12fcc29" containerName="installer" Feb 24 02:21:13.920285 master-0 kubenswrapper[31411]: I0224 02:21:13.920190 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa1462b-8f1c-4a77-9c1c-f0f79910737f" containerName="assisted-installer-controller" Feb 24 02:21:13.921058 master-0 kubenswrapper[31411]: I0224 02:21:13.920637 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:13.925447 master-0 kubenswrapper[31411]: I0224 02:21:13.925405 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 02:21:13.925619 master-0 kubenswrapper[31411]: I0224 02:21:13.925565 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 02:21:13.930791 master-0 kubenswrapper[31411]: I0224 02:21:13.930748 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_95806c9442ee27c355bfbf25ba6f70f0/startup-monitor/0.log" Feb 24 02:21:13.930884 master-0 kubenswrapper[31411]: I0224 02:21:13.930853 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:21:13.951303 master-0 kubenswrapper[31411]: I0224 02:21:13.950989 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 02:21:13.951303 master-0 kubenswrapper[31411]: I0224 02:21:13.951143 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 02:21:13.951303 master-0 kubenswrapper[31411]: I0224 02:21:13.951229 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 02:21:13.951538 master-0 kubenswrapper[31411]: I0224 02:21:13.951524 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-7k4nm" Feb 24 02:21:13.980296 master-0 kubenswrapper[31411]: I0224 02:21:13.980240 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-gmjbd"] Feb 24 02:21:13.996359 
master-0 kubenswrapper[31411]: I0224 02:21:13.996310 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea06201-f138-475b-86de-769d333048cb-config\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:13.996467 master-0 kubenswrapper[31411]: I0224 02:21:13.996442 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ea06201-f138-475b-86de-769d333048cb-trusted-ca\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:13.996516 master-0 kubenswrapper[31411]: I0224 02:21:13.996477 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ea06201-f138-475b-86de-769d333048cb-serving-cert\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:13.996726 master-0 kubenswrapper[31411]: I0224 02:21:13.996675 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhn65\" (UniqueName: \"kubernetes.io/projected/8ea06201-f138-475b-86de-769d333048cb-kube-api-access-qhn65\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.097676 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.097843 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log" (OuterVolumeSpecName: "var-log") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.097905 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.097941 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.097964 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.097984 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests" (OuterVolumeSpecName: "manifests") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.097996 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") pod \"95806c9442ee27c355bfbf25ba6f70f0\" (UID: \"95806c9442ee27c355bfbf25ba6f70f0\") " Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098014 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock" (OuterVolumeSpecName: "var-lock") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098041 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098236 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea06201-f138-475b-86de-769d333048cb-config\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098333 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ea06201-f138-475b-86de-769d333048cb-trusted-ca\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098363 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ea06201-f138-475b-86de-769d333048cb-serving-cert\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098391 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhn65\" (UniqueName: \"kubernetes.io/projected/8ea06201-f138-475b-86de-769d333048cb-kube-api-access-qhn65\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098450 31411 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-manifests\") on 
node \"master-0\" DevicePath \"\"" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098461 31411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098472 31411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.098485 31411 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-var-log\") on node \"master-0\" DevicePath \"\"" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.099036 31411 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.099682 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ea06201-f138-475b-86de-769d333048cb-config\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.100679 master-0 kubenswrapper[31411]: I0224 02:21:14.100633 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ea06201-f138-475b-86de-769d333048cb-trusted-ca\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.103977 master-0 kubenswrapper[31411]: I0224 02:21:14.103942 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "95806c9442ee27c355bfbf25ba6f70f0" (UID: "95806c9442ee27c355bfbf25ba6f70f0"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:21:14.114351 master-0 kubenswrapper[31411]: I0224 02:21:14.114318 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhn65\" (UniqueName: \"kubernetes.io/projected/8ea06201-f138-475b-86de-769d333048cb-kube-api-access-qhn65\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.116820 master-0 kubenswrapper[31411]: I0224 02:21:14.116782 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ea06201-f138-475b-86de-769d333048cb-serving-cert\") pod \"console-operator-5df5ffc47c-gmjbd\" (UID: \"8ea06201-f138-475b-86de-769d333048cb\") " pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.200016 master-0 kubenswrapper[31411]: I0224 02:21:14.199964 31411 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95806c9442ee27c355bfbf25ba6f70f0-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:21:14.256346 master-0 kubenswrapper[31411]: I0224 02:21:14.256286 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:14.495789 master-0 kubenswrapper[31411]: I0224 02:21:14.494928 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:14.495789 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:14.495789 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:14.495789 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:14.495789 master-0 kubenswrapper[31411]: I0224 02:21:14.495023 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:14.632054 master-0 kubenswrapper[31411]: I0224 02:21:14.631970 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_95806c9442ee27c355bfbf25ba6f70f0/startup-monitor/0.log" Feb 24 02:21:14.632054 master-0 kubenswrapper[31411]: I0224 02:21:14.632052 31411 generic.go:334] "Generic (PLEG): container finished" podID="95806c9442ee27c355bfbf25ba6f70f0" containerID="d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8" exitCode=137 Feb 24 02:21:14.633040 master-0 kubenswrapper[31411]: I0224 02:21:14.632119 31411 scope.go:117] "RemoveContainer" containerID="d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8" Feb 24 02:21:14.633040 master-0 kubenswrapper[31411]: I0224 02:21:14.632753 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:21:14.669752 master-0 kubenswrapper[31411]: I0224 02:21:14.669707 31411 scope.go:117] "RemoveContainer" containerID="d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8" Feb 24 02:21:14.670228 master-0 kubenswrapper[31411]: E0224 02:21:14.670180 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8\": container with ID starting with d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8 not found: ID does not exist" containerID="d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8" Feb 24 02:21:14.670304 master-0 kubenswrapper[31411]: I0224 02:21:14.670221 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8"} err="failed to get container status \"d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8\": rpc error: code = NotFound desc = could not find container \"d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8\": container with ID starting with d3a2e46fc3575684f8b4b20ad7df8bf7f99ff87fb6ed3592a6211e989eb744b8 not found: ID does not exist" Feb 24 02:21:14.694303 master-0 kubenswrapper[31411]: I0224 02:21:14.694210 31411 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="d4b48e30-3a01-4900-9aad-811232d6b8b2" Feb 24 02:21:14.786011 master-0 kubenswrapper[31411]: I0224 02:21:14.783988 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-gmjbd"] Feb 24 02:21:14.795668 master-0 kubenswrapper[31411]: W0224 02:21:14.794599 31411 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ea06201_f138_475b_86de_769d333048cb.slice/crio-be0a33a790145e80d92d9182cfe0f11ac88e5b42f5147b00d8a7a1405d4caa28 WatchSource:0}: Error finding container be0a33a790145e80d92d9182cfe0f11ac88e5b42f5147b00d8a7a1405d4caa28: Status 404 returned error can't find the container with id be0a33a790145e80d92d9182cfe0f11ac88e5b42f5147b00d8a7a1405d4caa28 Feb 24 02:21:14.799564 master-0 kubenswrapper[31411]: I0224 02:21:14.798445 31411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 02:21:15.103925 master-0 kubenswrapper[31411]: I0224 02:21:15.103863 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95806c9442ee27c355bfbf25ba6f70f0" path="/var/lib/kubelet/pods/95806c9442ee27c355bfbf25ba6f70f0/volumes" Feb 24 02:21:15.104401 master-0 kubenswrapper[31411]: I0224 02:21:15.104356 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 24 02:21:15.125082 master-0 kubenswrapper[31411]: I0224 02:21:15.125007 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 24 02:21:15.125082 master-0 kubenswrapper[31411]: I0224 02:21:15.125057 31411 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="d4b48e30-3a01-4900-9aad-811232d6b8b2" Feb 24 02:21:15.130691 master-0 kubenswrapper[31411]: I0224 02:21:15.130610 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 24 02:21:15.130829 master-0 kubenswrapper[31411]: I0224 02:21:15.130692 31411 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
mirrorPodUID="d4b48e30-3a01-4900-9aad-811232d6b8b2" Feb 24 02:21:15.487154 master-0 kubenswrapper[31411]: I0224 02:21:15.487077 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:15.487495 master-0 kubenswrapper[31411]: I0224 02:21:15.487305 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:21:15.494359 master-0 kubenswrapper[31411]: I0224 02:21:15.494259 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:21:15.495645 master-0 kubenswrapper[31411]: I0224 02:21:15.495511 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:15.495645 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:15.495645 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:15.495645 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:15.495917 master-0 kubenswrapper[31411]: I0224 02:21:15.495634 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:15.647126 master-0 kubenswrapper[31411]: I0224 02:21:15.647030 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" event={"ID":"8ea06201-f138-475b-86de-769d333048cb","Type":"ContainerStarted","Data":"be0a33a790145e80d92d9182cfe0f11ac88e5b42f5147b00d8a7a1405d4caa28"} Feb 24 02:21:16.495968 master-0 kubenswrapper[31411]: I0224 02:21:16.495874 31411 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:16.495968 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:16.495968 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:16.495968 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:16.496771 master-0 kubenswrapper[31411]: I0224 02:21:16.496001 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:17.495478 master-0 kubenswrapper[31411]: I0224 02:21:17.495293 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:17.495478 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:17.495478 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:17.495478 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:17.497114 master-0 kubenswrapper[31411]: I0224 02:21:17.495485 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:18.495315 master-0 kubenswrapper[31411]: I0224 02:21:18.495192 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 24 02:21:18.495315 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:18.495315 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:18.495315 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:18.496445 master-0 kubenswrapper[31411]: I0224 02:21:18.495353 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:18.701927 master-0 kubenswrapper[31411]: I0224 02:21:18.701790 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-gmjbd_8ea06201-f138-475b-86de-769d333048cb/console-operator/0.log" Feb 24 02:21:18.702305 master-0 kubenswrapper[31411]: I0224 02:21:18.701981 31411 generic.go:334] "Generic (PLEG): container finished" podID="8ea06201-f138-475b-86de-769d333048cb" containerID="d2c3017be7e04dc534078fdf7268da3562c82c4f1a7d7149c6b5a473cfde839b" exitCode=255 Feb 24 02:21:18.702305 master-0 kubenswrapper[31411]: I0224 02:21:18.702080 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" event={"ID":"8ea06201-f138-475b-86de-769d333048cb","Type":"ContainerDied","Data":"d2c3017be7e04dc534078fdf7268da3562c82c4f1a7d7149c6b5a473cfde839b"} Feb 24 02:21:18.703079 master-0 kubenswrapper[31411]: I0224 02:21:18.703010 31411 scope.go:117] "RemoveContainer" containerID="d2c3017be7e04dc534078fdf7268da3562c82c4f1a7d7149c6b5a473cfde839b" Feb 24 02:21:19.496555 master-0 kubenswrapper[31411]: I0224 02:21:19.496468 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
02:21:19.496555 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:19.496555 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:19.496555 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:19.497693 master-0 kubenswrapper[31411]: I0224 02:21:19.496598 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:19.513600 master-0 kubenswrapper[31411]: I0224 02:21:19.513538 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 02:21:19.513917 master-0 kubenswrapper[31411]: E0224 02:21:19.513843 31411 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:19.514025 master-0 kubenswrapper[31411]: E0224 02:21:19.513919 31411 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:19.514092 master-0 kubenswrapper[31411]: E0224 02:21:19.514033 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access podName:5508683b-09ae-47a1-89fd-b0891a881e09 nodeName:}" failed. No retries permitted until 2026-02-24 02:21:35.513995672 +0000 UTC m=+38.731193558 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access") pod "installer-2-master-0" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 02:21:19.714460 master-0 kubenswrapper[31411]: I0224 02:21:19.714382 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-gmjbd_8ea06201-f138-475b-86de-769d333048cb/console-operator/1.log" Feb 24 02:21:19.715342 master-0 kubenswrapper[31411]: I0224 02:21:19.715302 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-gmjbd_8ea06201-f138-475b-86de-769d333048cb/console-operator/0.log" Feb 24 02:21:19.715623 master-0 kubenswrapper[31411]: I0224 02:21:19.715554 31411 generic.go:334] "Generic (PLEG): container finished" podID="8ea06201-f138-475b-86de-769d333048cb" containerID="a2b19221868af929d70648cf2270d9cf601e49138da9d890ed4b0716f7a6dd82" exitCode=255 Feb 24 02:21:19.715776 master-0 kubenswrapper[31411]: I0224 02:21:19.715665 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" event={"ID":"8ea06201-f138-475b-86de-769d333048cb","Type":"ContainerDied","Data":"a2b19221868af929d70648cf2270d9cf601e49138da9d890ed4b0716f7a6dd82"} Feb 24 02:21:19.715948 master-0 kubenswrapper[31411]: I0224 02:21:19.715924 31411 scope.go:117] "RemoveContainer" containerID="d2c3017be7e04dc534078fdf7268da3562c82c4f1a7d7149c6b5a473cfde839b" Feb 24 02:21:19.716638 master-0 kubenswrapper[31411]: I0224 02:21:19.716549 31411 scope.go:117] "RemoveContainer" containerID="a2b19221868af929d70648cf2270d9cf601e49138da9d890ed4b0716f7a6dd82" Feb 24 02:21:19.717064 master-0 kubenswrapper[31411]: E0224 02:21:19.717003 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-5df5ffc47c-gmjbd_openshift-console-operator(8ea06201-f138-475b-86de-769d333048cb)\"" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" podUID="8ea06201-f138-475b-86de-769d333048cb" Feb 24 02:21:20.495472 master-0 kubenswrapper[31411]: I0224 02:21:20.495399 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:20.495472 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:20.495472 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:20.495472 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:20.496036 master-0 kubenswrapper[31411]: I0224 02:21:20.495493 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:20.728205 master-0 kubenswrapper[31411]: I0224 02:21:20.728132 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-gmjbd_8ea06201-f138-475b-86de-769d333048cb/console-operator/1.log" Feb 24 02:21:20.729172 master-0 kubenswrapper[31411]: I0224 02:21:20.728982 31411 scope.go:117] "RemoveContainer" containerID="a2b19221868af929d70648cf2270d9cf601e49138da9d890ed4b0716f7a6dd82" Feb 24 02:21:20.729371 master-0 kubenswrapper[31411]: E0224 02:21:20.729299 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator 
pod=console-operator-5df5ffc47c-gmjbd_openshift-console-operator(8ea06201-f138-475b-86de-769d333048cb)\"" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" podUID="8ea06201-f138-475b-86de-769d333048cb" Feb 24 02:21:21.504213 master-0 kubenswrapper[31411]: I0224 02:21:21.495697 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:21.504213 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:21.504213 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:21.504213 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:21.504213 master-0 kubenswrapper[31411]: I0224 02:21:21.503432 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 02:21:22.495853 master-0 kubenswrapper[31411]: I0224 02:21:22.495758 31411 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-22sgl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 02:21:22.495853 master-0 kubenswrapper[31411]: [-]has-synced failed: reason withheld Feb 24 02:21:22.495853 master-0 kubenswrapper[31411]: [+]process-running ok Feb 24 02:21:22.495853 master-0 kubenswrapper[31411]: healthz check failed Feb 24 02:21:22.497011 master-0 kubenswrapper[31411]: I0224 02:21:22.495868 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" podUID="6a08a1e4-cf92-4733-a8af-c7ac5b21e925" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 24 02:21:23.500263 master-0 kubenswrapper[31411]: I0224 02:21:23.500205 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:21:23.509078 master-0 kubenswrapper[31411]: I0224 02:21:23.509040 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7b65dc9fcb-22sgl" Feb 24 02:21:24.257788 master-0 kubenswrapper[31411]: I0224 02:21:24.257675 31411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:24.257788 master-0 kubenswrapper[31411]: I0224 02:21:24.257770 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" Feb 24 02:21:24.258558 master-0 kubenswrapper[31411]: I0224 02:21:24.258512 31411 scope.go:117] "RemoveContainer" containerID="a2b19221868af929d70648cf2270d9cf601e49138da9d890ed4b0716f7a6dd82" Feb 24 02:21:24.260233 master-0 kubenswrapper[31411]: E0224 02:21:24.258917 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-5df5ffc47c-gmjbd_openshift-console-operator(8ea06201-f138-475b-86de-769d333048cb)\"" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" podUID="8ea06201-f138-475b-86de-769d333048cb" Feb 24 02:21:27.526765 master-0 kubenswrapper[31411]: I0224 02:21:27.526690 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-95876988f-c58ls"] Feb 24 02:21:27.528113 master-0 kubenswrapper[31411]: I0224 02:21:27.528070 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.536593 master-0 kubenswrapper[31411]: I0224 02:21:27.533888 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 02:21:27.536593 master-0 kubenswrapper[31411]: I0224 02:21:27.535804 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 02:21:27.536593 master-0 kubenswrapper[31411]: I0224 02:21:27.536129 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 02:21:27.553611 master-0 kubenswrapper[31411]: I0224 02:21:27.548054 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 02:21:27.553611 master-0 kubenswrapper[31411]: I0224 02:21:27.548166 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 02:21:27.553611 master-0 kubenswrapper[31411]: I0224 02:21:27.549426 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 02:21:27.560701 master-0 kubenswrapper[31411]: I0224 02:21:27.555712 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 02:21:27.560701 master-0 kubenswrapper[31411]: I0224 02:21:27.555999 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-2xl7c" Feb 24 02:21:27.560701 master-0 kubenswrapper[31411]: I0224 02:21:27.556320 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 02:21:27.560701 master-0 kubenswrapper[31411]: I0224 02:21:27.557222 31411 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 02:21:27.560701 master-0 kubenswrapper[31411]: I0224 02:21:27.558049 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 02:21:27.560701 master-0 kubenswrapper[31411]: I0224 02:21:27.560162 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 02:21:27.561522 master-0 kubenswrapper[31411]: I0224 02:21:27.561450 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-95876988f-c58ls"] Feb 24 02:21:27.565385 master-0 kubenswrapper[31411]: I0224 02:21:27.565348 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 02:21:27.570773 master-0 kubenswrapper[31411]: I0224 02:21:27.570715 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.687902 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-policies\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688007 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-95876988f-c58ls\" (UID: 
\"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688207 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h6lb\" (UniqueName: \"kubernetes.io/projected/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-kube-api-access-6h6lb\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688349 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688442 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688513 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " 
pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688560 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-dir\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688626 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-session\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688726 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-error\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688849 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-login\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688934 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.688975 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.690189 master-0 kubenswrapper[31411]: I0224 02:21:27.689020 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.790507 master-0 kubenswrapper[31411]: I0224 02:21:27.790338 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-login\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.790507 master-0 kubenswrapper[31411]: I0224 02:21:27.790402 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.790886 master-0 kubenswrapper[31411]: I0224 02:21:27.790618 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791155 master-0 kubenswrapper[31411]: I0224 02:21:27.791105 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791377 master-0 kubenswrapper[31411]: I0224 02:21:27.791320 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-policies\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791479 master-0 kubenswrapper[31411]: I0224 02:21:27.791380 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-95876988f-c58ls\" (UID: 
\"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791479 master-0 kubenswrapper[31411]: I0224 02:21:27.791420 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h6lb\" (UniqueName: \"kubernetes.io/projected/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-kube-api-access-6h6lb\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791479 master-0 kubenswrapper[31411]: I0224 02:21:27.791461 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791739 master-0 kubenswrapper[31411]: I0224 02:21:27.791492 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791739 master-0 kubenswrapper[31411]: I0224 02:21:27.791524 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791739 master-0 
kubenswrapper[31411]: I0224 02:21:27.791555 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-dir\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791739 master-0 kubenswrapper[31411]: I0224 02:21:27.791603 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-session\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.791739 master-0 kubenswrapper[31411]: I0224 02:21:27.791643 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-error\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.792272 master-0 kubenswrapper[31411]: I0224 02:21:27.792206 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.793261 master-0 kubenswrapper[31411]: E0224 02:21:27.793175 31411 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 24 02:21:27.793399 master-0 
kubenswrapper[31411]: E0224 02:21:27.793348 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig podName:226b24ba-d12c-4453-a6c9-0d2f7f50c4ff nodeName:}" failed. No retries permitted until 2026-02-24 02:21:28.293307081 +0000 UTC m=+31.510504967 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig") pod "oauth-openshift-95876988f-c58ls" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff") : configmap "v4-0-config-system-cliconfig" not found Feb 24 02:21:27.793399 master-0 kubenswrapper[31411]: I0224 02:21:27.793379 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-dir\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.793783 master-0 kubenswrapper[31411]: I0224 02:21:27.793727 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-policies\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.799483 master-0 kubenswrapper[31411]: I0224 02:21:27.796940 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 
02:21:27.799483 master-0 kubenswrapper[31411]: I0224 02:21:27.797303 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.799483 master-0 kubenswrapper[31411]: I0224 02:21:27.797402 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-error\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.799483 master-0 kubenswrapper[31411]: I0224 02:21:27.797699 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-login\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.799483 master-0 kubenswrapper[31411]: I0224 02:21:27.797877 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.799483 master-0 kubenswrapper[31411]: I0224 02:21:27.798085 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.800010 master-0 kubenswrapper[31411]: I0224 02:21:27.799654 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.802336 master-0 kubenswrapper[31411]: I0224 02:21:27.802259 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-session\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:27.817772 master-0 kubenswrapper[31411]: I0224 02:21:27.817716 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h6lb\" (UniqueName: \"kubernetes.io/projected/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-kube-api-access-6h6lb\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:28.302141 master-0 kubenswrapper[31411]: I0224 02:21:28.302012 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " 
pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:28.302545 master-0 kubenswrapper[31411]: E0224 02:21:28.302220 31411 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 24 02:21:28.302545 master-0 kubenswrapper[31411]: E0224 02:21:28.302351 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig podName:226b24ba-d12c-4453-a6c9-0d2f7f50c4ff nodeName:}" failed. No retries permitted until 2026-02-24 02:21:29.30231886 +0000 UTC m=+32.519516746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig") pod "oauth-openshift-95876988f-c58ls" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff") : configmap "v4-0-config-system-cliconfig" not found Feb 24 02:21:29.321767 master-0 kubenswrapper[31411]: I0224 02:21:29.321668 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls" Feb 24 02:21:29.322942 master-0 kubenswrapper[31411]: E0224 02:21:29.321898 31411 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 24 02:21:29.322942 master-0 kubenswrapper[31411]: E0224 02:21:29.322041 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig podName:226b24ba-d12c-4453-a6c9-0d2f7f50c4ff nodeName:}" failed. 
No retries permitted until 2026-02-24 02:21:31.322004674 +0000 UTC m=+34.539202561 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig") pod "oauth-openshift-95876988f-c58ls" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff") : configmap "v4-0-config-system-cliconfig" not found
Feb 24 02:21:29.386410 master-0 kubenswrapper[31411]: I0224 02:21:29.386325 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:21:29.386736 master-0 kubenswrapper[31411]: I0224 02:21:29.386693 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 02:21:29.422110 master-0 kubenswrapper[31411]: I0224 02:21:29.422031 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rg9r6"
Feb 24 02:21:31.361215 master-0 kubenswrapper[31411]: I0224 02:21:31.361108 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls"
Feb 24 02:21:31.362690 master-0 kubenswrapper[31411]: I0224 02:21:31.362615 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-95876988f-c58ls\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " pod="openshift-authentication/oauth-openshift-95876988f-c58ls"
Feb 24 02:21:31.499447 master-0 kubenswrapper[31411]: I0224 02:21:31.499373 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-95876988f-c58ls"
Feb 24 02:21:32.010190 master-0 kubenswrapper[31411]: I0224 02:21:32.010117 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-95876988f-c58ls"]
Feb 24 02:21:32.863904 master-0 kubenswrapper[31411]: I0224 02:21:32.863803 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-95876988f-c58ls" event={"ID":"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff","Type":"ContainerStarted","Data":"1904f3d3180cba6d9a314ac02ca7481de11d374d89cc516057962012adc117a5"}
Feb 24 02:21:34.100075 master-0 kubenswrapper[31411]: I0224 02:21:34.097411 31411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 24 02:21:34.100075 master-0 kubenswrapper[31411]: I0224 02:21:34.097853 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager" containerID="cri-o://802885a1b2ef2df10b5ffa5642c921493548e8e31f3eaf3dc4bdd2f7c156af95" gracePeriod=30
Feb 24 02:21:34.100075 master-0 kubenswrapper[31411]: I0224 02:21:34.098086 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://dea6fc53fd855f67e834ba785b44e2405a5a05c89259e81225c2459f00ab9410" gracePeriod=30
Feb 24 02:21:34.100075 master-0 kubenswrapper[31411]: I0224 02:21:34.098160 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://f7e48d7a0d2c98b03ed618e0f0670a90b569c740794e4265b966c9259a6da4db" gracePeriod=30
Feb 24 02:21:34.100075 master-0 kubenswrapper[31411]: I0224 02:21:34.098218 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="cluster-policy-controller" containerID="cri-o://8dc4659ecc15c6bfe49a9925903b4f4687f239838392613c4865c83a8905650b" gracePeriod=30
Feb 24 02:21:34.103414 master-0 kubenswrapper[31411]: I0224 02:21:34.103364 31411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 24 02:21:34.103891 master-0 kubenswrapper[31411]: E0224 02:21:34.103849 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager-cert-syncer"
Feb 24 02:21:34.103891 master-0 kubenswrapper[31411]: I0224 02:21:34.103886 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager-cert-syncer"
Feb 24 02:21:34.103991 master-0 kubenswrapper[31411]: E0224 02:21:34.103931 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager"
Feb 24 02:21:34.103991 master-0 kubenswrapper[31411]: I0224 02:21:34.103948 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager"
Feb 24 02:21:34.103991 master-0 kubenswrapper[31411]: E0224 02:21:34.103984 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager-recovery-controller"
Feb 24 02:21:34.104118 master-0 kubenswrapper[31411]: I0224 02:21:34.103998 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager-recovery-controller"
Feb 24 02:21:34.104118 master-0 kubenswrapper[31411]: E0224 02:21:34.104022 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="cluster-policy-controller"
Feb 24 02:21:34.104118 master-0 kubenswrapper[31411]: I0224 02:21:34.104034 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="cluster-policy-controller"
Feb 24 02:21:34.104346 master-0 kubenswrapper[31411]: I0224 02:21:34.104307 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager-recovery-controller"
Feb 24 02:21:34.104412 master-0 kubenswrapper[31411]: I0224 02:21:34.104369 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager"
Feb 24 02:21:34.104412 master-0 kubenswrapper[31411]: I0224 02:21:34.104398 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="kube-controller-manager-cert-syncer"
Feb 24 02:21:34.104496 master-0 kubenswrapper[31411]: I0224 02:21:34.104426 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="986668ae1bbdf9cce9dceeca068e9031" containerName="cluster-policy-controller"
Feb 24 02:21:34.225990 master-0 kubenswrapper[31411]: I0224 02:21:34.225902 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"754ca2ae56da4950b59492ccafe15df5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:34.226276 master-0 kubenswrapper[31411]: I0224 02:21:34.226212 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"754ca2ae56da4950b59492ccafe15df5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:34.327835 master-0 kubenswrapper[31411]: I0224 02:21:34.327762 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"754ca2ae56da4950b59492ccafe15df5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:34.328038 master-0 kubenswrapper[31411]: I0224 02:21:34.327920 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"754ca2ae56da4950b59492ccafe15df5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:34.328119 master-0 kubenswrapper[31411]: I0224 02:21:34.328059 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"754ca2ae56da4950b59492ccafe15df5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:34.328187 master-0 kubenswrapper[31411]: I0224 02:21:34.328122 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"754ca2ae56da4950b59492ccafe15df5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:34.887336 master-0 kubenswrapper[31411]: I0224 02:21:34.884040 31411 generic.go:334] "Generic (PLEG): container finished" podID="070ebb2d-57a2-4c76-8c93-e09d398f3b73" containerID="51a3db0d894d96bab79a718a222631106e9405e2deb8d971fa5341ac8b946184" exitCode=0
Feb 24 02:21:34.887336 master-0 kubenswrapper[31411]: I0224 02:21:34.884178 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"070ebb2d-57a2-4c76-8c93-e09d398f3b73","Type":"ContainerDied","Data":"51a3db0d894d96bab79a718a222631106e9405e2deb8d971fa5341ac8b946184"}
Feb 24 02:21:34.890054 master-0 kubenswrapper[31411]: I0224 02:21:34.889981 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="986668ae1bbdf9cce9dceeca068e9031" podUID="754ca2ae56da4950b59492ccafe15df5"
Feb 24 02:21:34.894614 master-0 kubenswrapper[31411]: I0224 02:21:34.892697 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_986668ae1bbdf9cce9dceeca068e9031/kube-controller-manager-cert-syncer/0.log"
Feb 24 02:21:34.899768 master-0 kubenswrapper[31411]: I0224 02:21:34.898378 31411 generic.go:334] "Generic (PLEG): container finished" podID="986668ae1bbdf9cce9dceeca068e9031" containerID="dea6fc53fd855f67e834ba785b44e2405a5a05c89259e81225c2459f00ab9410" exitCode=0
Feb 24 02:21:34.899768 master-0 kubenswrapper[31411]: I0224 02:21:34.898400 31411 generic.go:334] "Generic (PLEG): container finished" podID="986668ae1bbdf9cce9dceeca068e9031" containerID="f7e48d7a0d2c98b03ed618e0f0670a90b569c740794e4265b966c9259a6da4db" exitCode=2
Feb 24 02:21:34.899768 master-0 kubenswrapper[31411]: I0224 02:21:34.898407 31411 generic.go:334] "Generic (PLEG): container finished" podID="986668ae1bbdf9cce9dceeca068e9031" containerID="8dc4659ecc15c6bfe49a9925903b4f4687f239838392613c4865c83a8905650b" exitCode=0
Feb 24 02:21:34.899768 master-0 kubenswrapper[31411]: I0224 02:21:34.898416 31411 generic.go:334] "Generic (PLEG): container finished" podID="986668ae1bbdf9cce9dceeca068e9031" containerID="802885a1b2ef2df10b5ffa5642c921493548e8e31f3eaf3dc4bdd2f7c156af95" exitCode=0
Feb 24 02:21:34.988383 master-0 kubenswrapper[31411]: I0224 02:21:34.988339 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_986668ae1bbdf9cce9dceeca068e9031/kube-controller-manager-cert-syncer/0.log"
Feb 24 02:21:34.989217 master-0 kubenswrapper[31411]: I0224 02:21:34.989185 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:34.997238 master-0 kubenswrapper[31411]: I0224 02:21:34.997193 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="986668ae1bbdf9cce9dceeca068e9031" podUID="754ca2ae56da4950b59492ccafe15df5"
Feb 24 02:21:35.144672 master-0 kubenswrapper[31411]: I0224 02:21:35.144511 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-resource-dir\") pod \"986668ae1bbdf9cce9dceeca068e9031\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") "
Feb 24 02:21:35.144672 master-0 kubenswrapper[31411]: I0224 02:21:35.144644 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-cert-dir\") pod \"986668ae1bbdf9cce9dceeca068e9031\" (UID: \"986668ae1bbdf9cce9dceeca068e9031\") "
Feb 24 02:21:35.145363 master-0 kubenswrapper[31411]: I0224 02:21:35.145145 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "986668ae1bbdf9cce9dceeca068e9031" (UID: "986668ae1bbdf9cce9dceeca068e9031"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:21:35.145363 master-0 kubenswrapper[31411]: I0224 02:21:35.145175 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "986668ae1bbdf9cce9dceeca068e9031" (UID: "986668ae1bbdf9cce9dceeca068e9031"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:21:35.246381 master-0 kubenswrapper[31411]: I0224 02:21:35.246317 31411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 02:21:35.246381 master-0 kubenswrapper[31411]: I0224 02:21:35.246359 31411 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/986668ae1bbdf9cce9dceeca068e9031-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 02:21:35.553714 master-0 kubenswrapper[31411]: I0224 02:21:35.553637 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 24 02:21:35.554051 master-0 kubenswrapper[31411]: E0224 02:21:35.553988 31411 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 02:21:35.554051 master-0 kubenswrapper[31411]: E0224 02:21:35.554050 31411 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 02:21:35.554211 master-0 kubenswrapper[31411]: E0224 02:21:35.554150 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access podName:5508683b-09ae-47a1-89fd-b0891a881e09 nodeName:}" failed. No retries permitted until 2026-02-24 02:22:07.554115536 +0000 UTC m=+70.771313412 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access") pod "installer-2-master-0" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 02:21:35.909489 master-0 kubenswrapper[31411]: I0224 02:21:35.909081 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-95876988f-c58ls" event={"ID":"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff","Type":"ContainerStarted","Data":"78c1739440f378ea9547e3b56777f5292327e7321779d6ef20c9e2cab0a21f2f"}
Feb 24 02:21:35.909489 master-0 kubenswrapper[31411]: I0224 02:21:35.909382 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-95876988f-c58ls"
Feb 24 02:21:35.912371 master-0 kubenswrapper[31411]: I0224 02:21:35.912344 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_986668ae1bbdf9cce9dceeca068e9031/kube-controller-manager-cert-syncer/0.log"
Feb 24 02:21:35.913679 master-0 kubenswrapper[31411]: I0224 02:21:35.913650 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:35.913771 master-0 kubenswrapper[31411]: I0224 02:21:35.913675 31411 scope.go:117] "RemoveContainer" containerID="dea6fc53fd855f67e834ba785b44e2405a5a05c89259e81225c2459f00ab9410"
Feb 24 02:21:35.937045 master-0 kubenswrapper[31411]: I0224 02:21:35.937017 31411 scope.go:117] "RemoveContainer" containerID="f7e48d7a0d2c98b03ed618e0f0670a90b569c740794e4265b966c9259a6da4db"
Feb 24 02:21:35.938505 master-0 kubenswrapper[31411]: I0224 02:21:35.938417 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-95876988f-c58ls" podStartSLOduration=5.993965398 podStartE2EDuration="8.938396576s" podCreationTimestamp="2026-02-24 02:21:27 +0000 UTC" firstStartedPulling="2026-02-24 02:21:32.019722612 +0000 UTC m=+35.236920488" lastFinishedPulling="2026-02-24 02:21:34.96415382 +0000 UTC m=+38.181351666" observedRunningTime="2026-02-24 02:21:35.937211603 +0000 UTC m=+39.154409449" watchObservedRunningTime="2026-02-24 02:21:35.938396576 +0000 UTC m=+39.155594462"
Feb 24 02:21:35.947607 master-0 kubenswrapper[31411]: I0224 02:21:35.945978 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="986668ae1bbdf9cce9dceeca068e9031" podUID="754ca2ae56da4950b59492ccafe15df5"
Feb 24 02:21:35.971939 master-0 kubenswrapper[31411]: I0224 02:21:35.969913 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="986668ae1bbdf9cce9dceeca068e9031" podUID="754ca2ae56da4950b59492ccafe15df5"
Feb 24 02:21:35.976031 master-0 kubenswrapper[31411]: I0224 02:21:35.975999 31411 scope.go:117] "RemoveContainer" containerID="8dc4659ecc15c6bfe49a9925903b4f4687f239838392613c4865c83a8905650b"
Feb 24 02:21:36.003853 master-0 kubenswrapper[31411]: I0224 02:21:36.000330 31411 scope.go:117] "RemoveContainer" containerID="802885a1b2ef2df10b5ffa5642c921493548e8e31f3eaf3dc4bdd2f7c156af95"
Feb 24 02:21:36.055558 master-0 kubenswrapper[31411]: I0224 02:21:36.038739 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-95876988f-c58ls"
Feb 24 02:21:36.359410 master-0 kubenswrapper[31411]: I0224 02:21:36.359366 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 24 02:21:36.476067 master-0 kubenswrapper[31411]: I0224 02:21:36.475987 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kube-api-access\") pod \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") "
Feb 24 02:21:36.476302 master-0 kubenswrapper[31411]: I0224 02:21:36.476129 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kubelet-dir\") pod \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") "
Feb 24 02:21:36.476302 master-0 kubenswrapper[31411]: I0224 02:21:36.476229 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-var-lock\") pod \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\" (UID: \"070ebb2d-57a2-4c76-8c93-e09d398f3b73\") "
Feb 24 02:21:36.479593 master-0 kubenswrapper[31411]: I0224 02:21:36.476367 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "070ebb2d-57a2-4c76-8c93-e09d398f3b73" (UID: "070ebb2d-57a2-4c76-8c93-e09d398f3b73"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:21:36.479593 master-0 kubenswrapper[31411]: I0224 02:21:36.476393 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-var-lock" (OuterVolumeSpecName: "var-lock") pod "070ebb2d-57a2-4c76-8c93-e09d398f3b73" (UID: "070ebb2d-57a2-4c76-8c93-e09d398f3b73"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:21:36.479593 master-0 kubenswrapper[31411]: I0224 02:21:36.476916 31411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 02:21:36.479593 master-0 kubenswrapper[31411]: I0224 02:21:36.476940 31411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/070ebb2d-57a2-4c76-8c93-e09d398f3b73-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 02:21:36.494356 master-0 kubenswrapper[31411]: I0224 02:21:36.494265 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "070ebb2d-57a2-4c76-8c93-e09d398f3b73" (UID: "070ebb2d-57a2-4c76-8c93-e09d398f3b73"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:21:36.578723 master-0 kubenswrapper[31411]: I0224 02:21:36.578659 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/070ebb2d-57a2-4c76-8c93-e09d398f3b73-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 02:21:36.922791 master-0 kubenswrapper[31411]: I0224 02:21:36.922743 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 24 02:21:36.922791 master-0 kubenswrapper[31411]: I0224 02:21:36.922795 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"070ebb2d-57a2-4c76-8c93-e09d398f3b73","Type":"ContainerDied","Data":"b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737"}
Feb 24 02:21:36.923142 master-0 kubenswrapper[31411]: I0224 02:21:36.922827 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9883c9e4f2305d2772b0fcadf9ca3936959b05b72ec07b2a03edcd3558e0737"
Feb 24 02:21:37.100471 master-0 kubenswrapper[31411]: I0224 02:21:37.100404 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986668ae1bbdf9cce9dceeca068e9031" path="/var/lib/kubelet/pods/986668ae1bbdf9cce9dceeca068e9031/volumes"
Feb 24 02:21:38.092353 master-0 kubenswrapper[31411]: I0224 02:21:38.092293 31411 scope.go:117] "RemoveContainer" containerID="a2b19221868af929d70648cf2270d9cf601e49138da9d890ed4b0716f7a6dd82"
Feb 24 02:21:38.278241 master-0 kubenswrapper[31411]: I0224 02:21:38.278152 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 24 02:21:38.278780 master-0 kubenswrapper[31411]: E0224 02:21:38.278718 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ebb2d-57a2-4c76-8c93-e09d398f3b73" containerName="installer"
Feb 24 02:21:38.278780 master-0 kubenswrapper[31411]: I0224 02:21:38.278753 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ebb2d-57a2-4c76-8c93-e09d398f3b73" containerName="installer"
Feb 24 02:21:38.279128 master-0 kubenswrapper[31411]: I0224 02:21:38.279080 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="070ebb2d-57a2-4c76-8c93-e09d398f3b73" containerName="installer"
Feb 24 02:21:38.280259 master-0 kubenswrapper[31411]: I0224 02:21:38.280207 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.283478 master-0 kubenswrapper[31411]: I0224 02:21:38.283170 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 24 02:21:38.287814 master-0 kubenswrapper[31411]: I0224 02:21:38.287566 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-rchfr"
Feb 24 02:21:38.304661 master-0 kubenswrapper[31411]: I0224 02:21:38.304512 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 24 02:21:38.411492 master-0 kubenswrapper[31411]: I0224 02:21:38.411226 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-var-lock\") pod \"installer-3-master-0\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.411492 master-0 kubenswrapper[31411]: I0224 02:21:38.411342 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.411823 master-0 kubenswrapper[31411]: I0224 02:21:38.411619 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.514886 master-0 kubenswrapper[31411]: I0224 02:21:38.514776 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-var-lock\") pod \"installer-3-master-0\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.515193 master-0 kubenswrapper[31411]: I0224 02:21:38.514979 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-var-lock\") pod \"installer-3-master-0\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.515193 master-0 kubenswrapper[31411]: I0224 02:21:38.515111 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.515354 master-0 kubenswrapper[31411]: I0224 02:21:38.515231 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.515354 master-0 kubenswrapper[31411]: I0224 02:21:38.515286 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.536349 master-0 kubenswrapper[31411]: I0224 02:21:38.536273 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.644614 master-0 kubenswrapper[31411]: I0224 02:21:38.644489 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:21:38.960420 master-0 kubenswrapper[31411]: I0224 02:21:38.960249 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-gmjbd_8ea06201-f138-475b-86de-769d333048cb/console-operator/1.log"
Feb 24 02:21:38.960420 master-0 kubenswrapper[31411]: I0224 02:21:38.960357 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" event={"ID":"8ea06201-f138-475b-86de-769d333048cb","Type":"ContainerStarted","Data":"309f5bb160d1806ce7165e06f0e15d75c2912b2dfe3c5c6d9766a8b146f6d185"}
Feb 24 02:21:38.961207 master-0 kubenswrapper[31411]: I0224 02:21:38.961168 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd"
Feb 24 02:21:38.971041 master-0 kubenswrapper[31411]: I0224 02:21:38.970972 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd"
Feb 24 02:21:38.985434 master-0 kubenswrapper[31411]: I0224 02:21:38.985320 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-5df5ffc47c-gmjbd" podStartSLOduration=23.255386882 podStartE2EDuration="25.985284416s" podCreationTimestamp="2026-02-24 02:21:13 +0000 UTC" firstStartedPulling="2026-02-24 02:21:14.79836696 +0000 UTC m=+18.015564816" lastFinishedPulling="2026-02-24 02:21:17.528264444 +0000 UTC m=+20.745462350" observedRunningTime="2026-02-24 02:21:38.982526678 +0000 UTC m=+42.199724564" watchObservedRunningTime="2026-02-24 02:21:38.985284416 +0000 UTC m=+42.202482292"
Feb 24 02:21:39.245453 master-0 kubenswrapper[31411]: I0224 02:21:39.240969 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 24 02:21:39.977145 master-0 kubenswrapper[31411]: I0224 02:21:39.976978 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"31ece62b-35a0-4e9f-a023-4bbeea187e6a","Type":"ContainerStarted","Data":"86bf2b6325181e208af70cb4d9a2f3add1ff276ed4f9decb02931097cc68dae9"}
Feb 24 02:21:39.977145 master-0 kubenswrapper[31411]: I0224 02:21:39.977088 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"31ece62b-35a0-4e9f-a023-4bbeea187e6a","Type":"ContainerStarted","Data":"b7538280697fe5d99f19e4ecd1e459ad2582b780847f9e06f19862a092b081f2"}
Feb 24 02:21:40.013999 master-0 kubenswrapper[31411]: I0224 02:21:40.013854 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.013820689 podStartE2EDuration="2.013820689s" podCreationTimestamp="2026-02-24 02:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:21:40.012015999 +0000 UTC m=+43.229213885" watchObservedRunningTime="2026-02-24 02:21:40.013820689 +0000 UTC m=+43.231018575"
Feb 24 02:21:44.163641 master-0 kubenswrapper[31411]: I0224 02:21:44.163537 31411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 24 02:21:44.164528 master-0 kubenswrapper[31411]: I0224 02:21:44.163869 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" containerID="cri-o://f26d70a857d43fac3deacb2102ae3da953979c9be93877036525bd880271cb08" gracePeriod=30
Feb 24 02:21:44.165379 master-0 kubenswrapper[31411]: I0224 02:21:44.165317 31411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Feb 24 02:21:44.165765 master-0 kubenswrapper[31411]: E0224 02:21:44.165731 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 24 02:21:44.165765 master-0 kubenswrapper[31411]: I0224 02:21:44.165755 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 24 02:21:44.165873 master-0 kubenswrapper[31411]: E0224 02:21:44.165808 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 24 02:21:44.165873 master-0 kubenswrapper[31411]: I0224 02:21:44.165819 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 24 02:21:44.166033 master-0 kubenswrapper[31411]: I0224 02:21:44.165999 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 24 02:21:44.166033 master-0 kubenswrapper[31411]: I0224 02:21:44.166024 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler"
Feb 24 02:21:44.167539 master-0 kubenswrapper[31411]: I0224 02:21:44.167496 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 02:21:44.208319 master-0 kubenswrapper[31411]: I0224 02:21:44.208259 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Feb 24 02:21:44.272158 master-0 kubenswrapper[31411]: I0224 02:21:44.272101 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 02:21:44.272491 master-0 kubenswrapper[31411]: I0224 02:21:44.272469 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 02:21:44.325114 master-0 kubenswrapper[31411]: I0224 02:21:44.325055 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 02:21:44.346164 master-0 kubenswrapper[31411]: I0224 02:21:44.346095 31411 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="ce3a2814-5eee-41ec-87b4-5d5c5476c6e4"
Feb 24 02:21:44.373183 master-0 kubenswrapper[31411]: I0224 02:21:44.373142 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") "
Feb 24 02:21:44.373628 master-0 kubenswrapper[31411]: I0224 02:21:44.373534 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets" (OuterVolumeSpecName: "secrets") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:21:44.373770 master-0 kubenswrapper[31411]: I0224 02:21:44.373750 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") "
Feb 24 02:21:44.373943 master-0 kubenswrapper[31411]: I0224 02:21:44.373811 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs" (OuterVolumeSpecName: "logs") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:21:44.378562 master-0 kubenswrapper[31411]: I0224 02:21:44.378533 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 02:21:44.378740 master-0 kubenswrapper[31411]: I0224 02:21:44.378721 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 02:21:44.378921 master-0 kubenswrapper[31411]: I0224 02:21:44.378905 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 02:21:44.379009 master-0 kubenswrapper[31411]: I0224 02:21:44.378996 31411 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") on node \"master-0\" DevicePath \"\""
Feb 24 02:21:44.379128 master-0 kubenswrapper[31411]: I0224 02:21:44.379092 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 02:21:44.379241 master-0 kubenswrapper[31411]: I0224 02:21:44.379222 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName:
\"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 02:21:44.507818 master-0 kubenswrapper[31411]: I0224 02:21:44.507777 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 02:21:44.552431 master-0 kubenswrapper[31411]: W0224 02:21:44.552360 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ff46cdb00d28519af7c0cdc9ea8d11.slice/crio-72f8787a7afaf4b3f67c62fa87f52c94bce2a618dfd34441bd25b63990a3417e WatchSource:0}: Error finding container 72f8787a7afaf4b3f67c62fa87f52c94bce2a618dfd34441bd25b63990a3417e: Status 404 returned error can't find the container with id 72f8787a7afaf4b3f67c62fa87f52c94bce2a618dfd34441bd25b63990a3417e Feb 24 02:21:44.861393 master-0 kubenswrapper[31411]: I0224 02:21:44.861336 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 24 02:21:44.861636 master-0 kubenswrapper[31411]: I0224 02:21:44.861592 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-3-master-0" podUID="31ece62b-35a0-4e9f-a023-4bbeea187e6a" containerName="installer" containerID="cri-o://86bf2b6325181e208af70cb4d9a2f3add1ff276ed4f9decb02931097cc68dae9" gracePeriod=30 Feb 24 02:21:45.028284 master-0 kubenswrapper[31411]: I0224 02:21:45.028193 31411 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="f26d70a857d43fac3deacb2102ae3da953979c9be93877036525bd880271cb08" exitCode=0 Feb 24 02:21:45.028462 master-0 kubenswrapper[31411]: I0224 02:21:45.028354 31411 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6d1c4a7e4e4241cdd4f673e537ec599a9ec1bd539d78669446c1a36b609a7a02" Feb 24 02:21:45.028462 master-0 kubenswrapper[31411]: I0224 02:21:45.028369 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 02:21:45.028462 master-0 kubenswrapper[31411]: I0224 02:21:45.028410 31411 scope.go:117] "RemoveContainer" containerID="5fbafce85063f872b1786e48e809b15f5aa08369e9d34c7d53d1c636ed17075e" Feb 24 02:21:45.031495 master-0 kubenswrapper[31411]: I0224 02:21:45.031424 31411 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="2457dc349077b5c7f4b7dcbaf833ec1967e6d2f9a313f5f811c265cc6634ec95" exitCode=0 Feb 24 02:21:45.031617 master-0 kubenswrapper[31411]: I0224 02:21:45.031547 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerDied","Data":"2457dc349077b5c7f4b7dcbaf833ec1967e6d2f9a313f5f811c265cc6634ec95"} Feb 24 02:21:45.031696 master-0 kubenswrapper[31411]: I0224 02:21:45.031645 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"72f8787a7afaf4b3f67c62fa87f52c94bce2a618dfd34441bd25b63990a3417e"} Feb 24 02:21:45.034916 master-0 kubenswrapper[31411]: I0224 02:21:45.034852 31411 generic.go:334] "Generic (PLEG): container finished" podID="5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd" containerID="a1bd23ca02400c09ae750684bb9e9e78e05cea2070ce8f8f95459966c9e876eb" exitCode=0 Feb 24 02:21:45.035045 master-0 kubenswrapper[31411]: I0224 02:21:45.034913 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" 
event={"ID":"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd","Type":"ContainerDied","Data":"a1bd23ca02400c09ae750684bb9e9e78e05cea2070ce8f8f95459966c9e876eb"} Feb 24 02:21:45.110287 master-0 kubenswrapper[31411]: I0224 02:21:45.110180 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c3cb71c9851003c8de7e7c5db4b87e" path="/var/lib/kubelet/pods/56c3cb71c9851003c8de7e7c5db4b87e/volumes" Feb 24 02:21:45.110814 master-0 kubenswrapper[31411]: I0224 02:21:45.110771 31411 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 24 02:21:45.136140 master-0 kubenswrapper[31411]: I0224 02:21:45.136075 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 24 02:21:45.136140 master-0 kubenswrapper[31411]: I0224 02:21:45.136132 31411 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="ce3a2814-5eee-41ec-87b4-5d5c5476c6e4" Feb 24 02:21:45.141842 master-0 kubenswrapper[31411]: I0224 02:21:45.141746 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 24 02:21:45.141842 master-0 kubenswrapper[31411]: I0224 02:21:45.141804 31411 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="ce3a2814-5eee-41ec-87b4-5d5c5476c6e4" Feb 24 02:21:46.076084 master-0 kubenswrapper[31411]: I0224 02:21:46.075981 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"243b73bebade5163025cc27d64f8ae820cc4a3563295e4260c1071b2d6b6b3ac"} Feb 24 02:21:46.080374 master-0 kubenswrapper[31411]: I0224 02:21:46.076108 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"31ef0690b431a4d84c1fc9a23d7eb62060a90aaa8560581973be1ebd40e27255"} Feb 24 02:21:46.091060 master-0 kubenswrapper[31411]: I0224 02:21:46.091007 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:46.140220 master-0 kubenswrapper[31411]: I0224 02:21:46.140096 31411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9cad938a-37d0-4ea7-a3d7-857ff0ed3fa1" Feb 24 02:21:46.140220 master-0 kubenswrapper[31411]: I0224 02:21:46.140143 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9cad938a-37d0-4ea7-a3d7-857ff0ed3fa1" Feb 24 02:21:46.159729 master-0 kubenswrapper[31411]: I0224 02:21:46.159648 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:21:46.165672 master-0 kubenswrapper[31411]: I0224 02:21:46.163925 31411 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:46.170775 master-0 kubenswrapper[31411]: I0224 02:21:46.170718 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:21:46.181051 master-0 kubenswrapper[31411]: I0224 02:21:46.180985 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:46.185846 master-0 kubenswrapper[31411]: I0224 02:21:46.184489 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:21:46.219545 master-0 kubenswrapper[31411]: W0224 02:21:46.219483 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod754ca2ae56da4950b59492ccafe15df5.slice/crio-1c7a4ddfc580299e516c3b8a5ba70fdc237d2a8d221bdcc8c2e2dd933bbe11dc WatchSource:0}: Error finding container 1c7a4ddfc580299e516c3b8a5ba70fdc237d2a8d221bdcc8c2e2dd933bbe11dc: Status 404 returned error can't find the container with id 1c7a4ddfc580299e516c3b8a5ba70fdc237d2a8d221bdcc8c2e2dd933bbe11dc Feb 24 02:21:46.616301 master-0 kubenswrapper[31411]: I0224 02:21:46.616244 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:21:46.749473 master-0 kubenswrapper[31411]: I0224 02:21:46.749380 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kube-api-access\") pod \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " Feb 24 02:21:46.749689 master-0 kubenswrapper[31411]: I0224 02:21:46.749475 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-var-lock\") pod \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " Feb 24 02:21:46.749788 master-0 kubenswrapper[31411]: I0224 02:21:46.749772 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kubelet-dir\") pod \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\" (UID: \"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd\") " Feb 24 02:21:46.750497 master-0 kubenswrapper[31411]: I0224 02:21:46.750433 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd" (UID: "5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:21:46.750653 master-0 kubenswrapper[31411]: I0224 02:21:46.750518 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-var-lock" (OuterVolumeSpecName: "var-lock") pod "5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd" (UID: "5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:21:46.767670 master-0 kubenswrapper[31411]: I0224 02:21:46.767532 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd" (UID: "5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:21:46.852043 master-0 kubenswrapper[31411]: I0224 02:21:46.851974 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:21:46.852043 master-0 kubenswrapper[31411]: I0224 02:21:46.852030 31411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:21:46.852043 master-0 kubenswrapper[31411]: I0224 02:21:46.852051 31411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:21:47.095375 master-0 kubenswrapper[31411]: I0224 02:21:47.095323 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 24 02:21:47.109175 master-0 kubenswrapper[31411]: I0224 02:21:47.109127 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 02:21:47.109287 master-0 kubenswrapper[31411]: I0224 02:21:47.109185 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd","Type":"ContainerDied","Data":"87721a77ed25537b4640a4bb3b51b25ccc9baed9db06541dd0ac651dd7e4b7bb"} Feb 24 02:21:47.109287 master-0 kubenswrapper[31411]: I0224 02:21:47.109225 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87721a77ed25537b4640a4bb3b51b25ccc9baed9db06541dd0ac651dd7e4b7bb" Feb 24 02:21:47.109287 master-0 kubenswrapper[31411]: I0224 02:21:47.109250 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"4db52baf0c43cf0ba6be6034a72db963c4810f2bc5b434ffe0179b9abc789fef"} Feb 24 02:21:47.109521 master-0 kubenswrapper[31411]: I0224 02:21:47.109282 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"754ca2ae56da4950b59492ccafe15df5","Type":"ContainerStarted","Data":"aafef463fb6ec586aa93d68463140f0e495d6fd2628a807d6f4cc72093d982ad"} Feb 24 02:21:47.109521 master-0 kubenswrapper[31411]: I0224 02:21:47.109314 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"754ca2ae56da4950b59492ccafe15df5","Type":"ContainerStarted","Data":"e8427f995dbba6ba67b5688619b31dac495d913cbeaa8eabfe11085df7d0a498"} Feb 24 02:21:47.109521 master-0 kubenswrapper[31411]: I0224 02:21:47.109342 31411 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"754ca2ae56da4950b59492ccafe15df5","Type":"ContainerStarted","Data":"1c7a4ddfc580299e516c3b8a5ba70fdc237d2a8d221bdcc8c2e2dd933bbe11dc"} Feb 24 02:21:47.132270 master-0 kubenswrapper[31411]: I0224 02:21:47.132166 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.13213188 podStartE2EDuration="3.13213188s" podCreationTimestamp="2026-02-24 02:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:21:47.128862218 +0000 UTC m=+50.346060084" watchObservedRunningTime="2026-02-24 02:21:47.13213188 +0000 UTC m=+50.349329726" Feb 24 02:21:48.134191 master-0 kubenswrapper[31411]: I0224 02:21:48.134104 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"754ca2ae56da4950b59492ccafe15df5","Type":"ContainerStarted","Data":"2dd35a4b899d0d46f8cae0b0fdc77f4727b3df36f32d794fdf4c0908a1c24f54"} Feb 24 02:21:48.134191 master-0 kubenswrapper[31411]: I0224 02:21:48.134188 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"754ca2ae56da4950b59492ccafe15df5","Type":"ContainerStarted","Data":"7221134b3af5b91666a2752516338d10accc510917d963f481c2b324d0d85d72"} Feb 24 02:21:48.165819 master-0 kubenswrapper[31411]: I0224 02:21:48.160684 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.160664782 podStartE2EDuration="2.160664782s" podCreationTimestamp="2026-02-24 02:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:21:48.159102778 +0000 UTC m=+51.376300654" watchObservedRunningTime="2026-02-24 02:21:48.160664782 +0000 UTC m=+51.377862658" Feb 24 02:21:49.666599 master-0 kubenswrapper[31411]: I0224 02:21:49.666474 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 24 02:21:49.667770 master-0 kubenswrapper[31411]: E0224 02:21:49.666939 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd" containerName="installer" Feb 24 02:21:49.667770 master-0 kubenswrapper[31411]: I0224 02:21:49.666964 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd" containerName="installer" Feb 24 02:21:49.667770 master-0 kubenswrapper[31411]: I0224 02:21:49.667215 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b66f6d8-e4a0-4abc-8cfe-8ecfc2a157bd" containerName="installer" Feb 24 02:21:49.668059 master-0 kubenswrapper[31411]: I0224 02:21:49.668020 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:49.703063 master-0 kubenswrapper[31411]: I0224 02:21:49.702975 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 24 02:21:49.708646 master-0 kubenswrapper[31411]: I0224 02:21:49.708558 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/766929c6-ef3a-4dee-ae94-76e32a331888-kube-api-access\") pod \"installer-4-master-0\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:49.708789 master-0 kubenswrapper[31411]: I0224 02:21:49.708687 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:49.708789 master-0 kubenswrapper[31411]: I0224 02:21:49.708740 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-var-lock\") pod \"installer-4-master-0\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:49.811614 master-0 kubenswrapper[31411]: I0224 02:21:49.811467 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:49.811950 master-0 kubenswrapper[31411]: I0224 02:21:49.811663 31411 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:49.811950 master-0 kubenswrapper[31411]: I0224 02:21:49.811706 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-var-lock\") pod \"installer-4-master-0\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:49.811950 master-0 kubenswrapper[31411]: I0224 02:21:49.811771 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-var-lock\") pod \"installer-4-master-0\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:49.812161 master-0 kubenswrapper[31411]: I0224 02:21:49.811972 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/766929c6-ef3a-4dee-ae94-76e32a331888-kube-api-access\") pod \"installer-4-master-0\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:49.842076 master-0 kubenswrapper[31411]: I0224 02:21:49.842012 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/766929c6-ef3a-4dee-ae94-76e32a331888-kube-api-access\") pod \"installer-4-master-0\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:50.036567 master-0 kubenswrapper[31411]: I0224 02:21:50.036480 31411 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 02:21:50.540774 master-0 kubenswrapper[31411]: I0224 02:21:50.540661 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 24 02:21:50.549058 master-0 kubenswrapper[31411]: W0224 02:21:50.548773 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod766929c6_ef3a_4dee_ae94_76e32a331888.slice/crio-67d4c4e88f4e48acee302360f73e0f446a490ee30453022485f51586840cadee WatchSource:0}: Error finding container 67d4c4e88f4e48acee302360f73e0f446a490ee30453022485f51586840cadee: Status 404 returned error can't find the container with id 67d4c4e88f4e48acee302360f73e0f446a490ee30453022485f51586840cadee Feb 24 02:21:51.177448 master-0 kubenswrapper[31411]: I0224 02:21:51.177371 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"766929c6-ef3a-4dee-ae94-76e32a331888","Type":"ContainerStarted","Data":"67d4c4e88f4e48acee302360f73e0f446a490ee30453022485f51586840cadee"} Feb 24 02:21:52.196229 master-0 kubenswrapper[31411]: I0224 02:21:52.196137 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"766929c6-ef3a-4dee-ae94-76e32a331888","Type":"ContainerStarted","Data":"a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16"} Feb 24 02:21:56.190983 master-0 kubenswrapper[31411]: I0224 02:21:56.190911 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:56.190983 master-0 kubenswrapper[31411]: I0224 02:21:56.190977 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:56.192184 master-0 kubenswrapper[31411]: I0224 02:21:56.191770 31411 
patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 24 02:21:56.192184 master-0 kubenswrapper[31411]: I0224 02:21:56.191845 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 24 02:21:56.192184 master-0 kubenswrapper[31411]: I0224 02:21:56.192078 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:56.192184 master-0 kubenswrapper[31411]: I0224 02:21:56.192103 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:56.200146 master-0 kubenswrapper[31411]: I0224 02:21:56.200111 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:21:56.230419 master-0 kubenswrapper[31411]: I0224 02:21:56.230312 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=7.230282254 podStartE2EDuration="7.230282254s" podCreationTimestamp="2026-02-24 02:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:21:52.233471022 +0000 UTC m=+55.450668908" watchObservedRunningTime="2026-02-24 02:21:56.230282254 +0000 UTC m=+59.447480140" Feb 24 
02:21:56.241431 master-0 kubenswrapper[31411]: I0224 02:21:56.241363 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:21:57.086359 master-0 kubenswrapper[31411]: I0224 02:21:57.086297 31411 scope.go:117] "RemoveContainer" containerID="f26d70a857d43fac3deacb2102ae3da953979c9be93877036525bd880271cb08"
Feb 24 02:22:06.182705 master-0 kubenswrapper[31411]: I0224 02:22:06.182604 31411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 24 02:22:06.182705 master-0 kubenswrapper[31411]: I0224 02:22:06.182718 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 24 02:22:07.595100 master-0 kubenswrapper[31411]: I0224 02:22:07.595000 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 24 02:22:07.600790 master-0 kubenswrapper[31411]: I0224 02:22:07.600707 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") " pod="openshift-kube-apiserver/installer-2-master-0"
Feb 24 02:22:07.797705 master-0 kubenswrapper[31411]: I0224 02:22:07.797551 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") pod \"5508683b-09ae-47a1-89fd-b0891a881e09\" (UID: \"5508683b-09ae-47a1-89fd-b0891a881e09\") "
Feb 24 02:22:07.802826 master-0 kubenswrapper[31411]: I0224 02:22:07.802738 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5508683b-09ae-47a1-89fd-b0891a881e09" (UID: "5508683b-09ae-47a1-89fd-b0891a881e09"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:22:07.900646 master-0 kubenswrapper[31411]: I0224 02:22:07.900384 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5508683b-09ae-47a1-89fd-b0891a881e09-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 02:22:09.467827 master-0 kubenswrapper[31411]: I0224 02:22:09.467713 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 24 02:22:09.468832 master-0 kubenswrapper[31411]: I0224 02:22:09.468106 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="766929c6-ef3a-4dee-ae94-76e32a331888" containerName="installer" containerID="cri-o://a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16" gracePeriod=30
Feb 24 02:22:10.141312 master-0 kubenswrapper[31411]: I0224 02:22:10.141174 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_766929c6-ef3a-4dee-ae94-76e32a331888/installer/0.log"
Feb 24 02:22:10.141312 master-0 kubenswrapper[31411]: I0224 02:22:10.141316 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 24 02:22:10.246198 master-0 kubenswrapper[31411]: I0224 02:22:10.246104 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "766929c6-ef3a-4dee-ae94-76e32a331888" (UID: "766929c6-ef3a-4dee-ae94-76e32a331888"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:22:10.246852 master-0 kubenswrapper[31411]: I0224 02:22:10.246767 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-kubelet-dir\") pod \"766929c6-ef3a-4dee-ae94-76e32a331888\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") "
Feb 24 02:22:10.247157 master-0 kubenswrapper[31411]: I0224 02:22:10.247106 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-var-lock\") pod \"766929c6-ef3a-4dee-ae94-76e32a331888\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") "
Feb 24 02:22:10.247279 master-0 kubenswrapper[31411]: I0224 02:22:10.247235 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/766929c6-ef3a-4dee-ae94-76e32a331888-kube-api-access\") pod \"766929c6-ef3a-4dee-ae94-76e32a331888\" (UID: \"766929c6-ef3a-4dee-ae94-76e32a331888\") "
Feb 24 02:22:10.247618 master-0 kubenswrapper[31411]: I0224 02:22:10.247233 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-var-lock" (OuterVolumeSpecName: "var-lock") pod "766929c6-ef3a-4dee-ae94-76e32a331888" (UID: "766929c6-ef3a-4dee-ae94-76e32a331888"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:22:10.247980 master-0 kubenswrapper[31411]: I0224 02:22:10.247918 31411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 02:22:10.247980 master-0 kubenswrapper[31411]: I0224 02:22:10.247957 31411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/766929c6-ef3a-4dee-ae94-76e32a331888-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 02:22:10.255287 master-0 kubenswrapper[31411]: I0224 02:22:10.255215 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766929c6-ef3a-4dee-ae94-76e32a331888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "766929c6-ef3a-4dee-ae94-76e32a331888" (UID: "766929c6-ef3a-4dee-ae94-76e32a331888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:22:10.350409 master-0 kubenswrapper[31411]: I0224 02:22:10.350254 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/766929c6-ef3a-4dee-ae94-76e32a331888-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 02:22:10.405006 master-0 kubenswrapper[31411]: I0224 02:22:10.404928 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_766929c6-ef3a-4dee-ae94-76e32a331888/installer/0.log"
Feb 24 02:22:10.405225 master-0 kubenswrapper[31411]: I0224 02:22:10.405049 31411 generic.go:334] "Generic (PLEG): container finished" podID="766929c6-ef3a-4dee-ae94-76e32a331888" containerID="a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16" exitCode=1
Feb 24 02:22:10.405225 master-0 kubenswrapper[31411]: I0224 02:22:10.405149 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 24 02:22:10.405438 master-0 kubenswrapper[31411]: I0224 02:22:10.405182 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"766929c6-ef3a-4dee-ae94-76e32a331888","Type":"ContainerDied","Data":"a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16"}
Feb 24 02:22:10.405438 master-0 kubenswrapper[31411]: I0224 02:22:10.405299 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"766929c6-ef3a-4dee-ae94-76e32a331888","Type":"ContainerDied","Data":"67d4c4e88f4e48acee302360f73e0f446a490ee30453022485f51586840cadee"}
Feb 24 02:22:10.405438 master-0 kubenswrapper[31411]: I0224 02:22:10.405342 31411 scope.go:117] "RemoveContainer" containerID="a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16"
Feb 24 02:22:10.449561 master-0 kubenswrapper[31411]: I0224 02:22:10.449522 31411 scope.go:117] "RemoveContainer" containerID="a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16"
Feb 24 02:22:10.450417 master-0 kubenswrapper[31411]: E0224 02:22:10.450321 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16\": container with ID starting with a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16 not found: ID does not exist" containerID="a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16"
Feb 24 02:22:10.450530 master-0 kubenswrapper[31411]: I0224 02:22:10.450475 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16"} err="failed to get container status \"a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16\": rpc error: code = NotFound desc = could not find container \"a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16\": container with ID starting with a5c4e0b5273e526a6ad36e9259d6c76a55d678146b51225bb5e9737af0fa0f16 not found: ID does not exist"
Feb 24 02:22:10.471145 master-0 kubenswrapper[31411]: I0224 02:22:10.467399 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 24 02:22:10.474087 master-0 kubenswrapper[31411]: I0224 02:22:10.474036 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 24 02:22:11.108052 master-0 kubenswrapper[31411]: I0224 02:22:11.107926 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766929c6-ef3a-4dee-ae94-76e32a331888" path="/var/lib/kubelet/pods/766929c6-ef3a-4dee-ae94-76e32a331888/volumes"
Feb 24 02:22:11.416914 master-0 kubenswrapper[31411]: I0224 02:22:11.416748 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_31ece62b-35a0-4e9f-a023-4bbeea187e6a/installer/0.log"
Feb 24 02:22:11.416914 master-0 kubenswrapper[31411]: I0224 02:22:11.416845 31411 generic.go:334] "Generic (PLEG): container finished" podID="31ece62b-35a0-4e9f-a023-4bbeea187e6a" containerID="86bf2b6325181e208af70cb4d9a2f3add1ff276ed4f9decb02931097cc68dae9" exitCode=1
Feb 24 02:22:11.417320 master-0 kubenswrapper[31411]: I0224 02:22:11.416997 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"31ece62b-35a0-4e9f-a023-4bbeea187e6a","Type":"ContainerDied","Data":"86bf2b6325181e208af70cb4d9a2f3add1ff276ed4f9decb02931097cc68dae9"}
Feb 24 02:22:11.802524 master-0 kubenswrapper[31411]: I0224 02:22:11.802436 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_31ece62b-35a0-4e9f-a023-4bbeea187e6a/installer/0.log"
Feb 24 02:22:11.803303 master-0 kubenswrapper[31411]: I0224 02:22:11.802626 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:22:11.980679 master-0 kubenswrapper[31411]: I0224 02:22:11.980593 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kube-api-access\") pod \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") "
Feb 24 02:22:11.980976 master-0 kubenswrapper[31411]: I0224 02:22:11.980932 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-var-lock\") pod \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") "
Feb 24 02:22:11.981064 master-0 kubenswrapper[31411]: I0224 02:22:11.980975 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kubelet-dir\") pod \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\" (UID: \"31ece62b-35a0-4e9f-a023-4bbeea187e6a\") "
Feb 24 02:22:11.981272 master-0 kubenswrapper[31411]: I0224 02:22:11.981204 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "31ece62b-35a0-4e9f-a023-4bbeea187e6a" (UID: "31ece62b-35a0-4e9f-a023-4bbeea187e6a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:22:11.981272 master-0 kubenswrapper[31411]: I0224 02:22:11.981151 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-var-lock" (OuterVolumeSpecName: "var-lock") pod "31ece62b-35a0-4e9f-a023-4bbeea187e6a" (UID: "31ece62b-35a0-4e9f-a023-4bbeea187e6a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:22:11.981642 master-0 kubenswrapper[31411]: I0224 02:22:11.981606 31411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 02:22:11.981731 master-0 kubenswrapper[31411]: I0224 02:22:11.981644 31411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 02:22:11.985609 master-0 kubenswrapper[31411]: I0224 02:22:11.985507 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "31ece62b-35a0-4e9f-a023-4bbeea187e6a" (UID: "31ece62b-35a0-4e9f-a023-4bbeea187e6a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:22:12.083764 master-0 kubenswrapper[31411]: I0224 02:22:12.083680 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31ece62b-35a0-4e9f-a023-4bbeea187e6a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 02:22:12.431138 master-0 kubenswrapper[31411]: I0224 02:22:12.430961 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_31ece62b-35a0-4e9f-a023-4bbeea187e6a/installer/0.log"
Feb 24 02:22:12.431138 master-0 kubenswrapper[31411]: I0224 02:22:12.431062 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"31ece62b-35a0-4e9f-a023-4bbeea187e6a","Type":"ContainerDied","Data":"b7538280697fe5d99f19e4ecd1e459ad2582b780847f9e06f19862a092b081f2"}
Feb 24 02:22:12.431517 master-0 kubenswrapper[31411]: I0224 02:22:12.431186 31411 scope.go:117] "RemoveContainer" containerID="86bf2b6325181e208af70cb4d9a2f3add1ff276ed4f9decb02931097cc68dae9"
Feb 24 02:22:12.431517 master-0 kubenswrapper[31411]: I0224 02:22:12.431192 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 02:22:12.482476 master-0 kubenswrapper[31411]: I0224 02:22:12.482395 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 24 02:22:12.486360 master-0 kubenswrapper[31411]: I0224 02:22:12.486301 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 24 02:22:13.105491 master-0 kubenswrapper[31411]: I0224 02:22:13.105392 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ece62b-35a0-4e9f-a023-4bbeea187e6a" path="/var/lib/kubelet/pods/31ece62b-35a0-4e9f-a023-4bbeea187e6a/volumes"
Feb 24 02:22:13.677060 master-0 kubenswrapper[31411]: I0224 02:22:13.676979 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 24 02:22:13.677425 master-0 kubenswrapper[31411]: E0224 02:22:13.677410 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766929c6-ef3a-4dee-ae94-76e32a331888" containerName="installer"
Feb 24 02:22:13.677518 master-0 kubenswrapper[31411]: I0224 02:22:13.677434 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="766929c6-ef3a-4dee-ae94-76e32a331888" containerName="installer"
Feb 24 02:22:13.677518 master-0 kubenswrapper[31411]: E0224 02:22:13.677467 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ece62b-35a0-4e9f-a023-4bbeea187e6a" containerName="installer"
Feb 24 02:22:13.677518 master-0 kubenswrapper[31411]: I0224 02:22:13.677480 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ece62b-35a0-4e9f-a023-4bbeea187e6a" containerName="installer"
Feb 24 02:22:13.677830 master-0 kubenswrapper[31411]: I0224 02:22:13.677785 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ece62b-35a0-4e9f-a023-4bbeea187e6a" containerName="installer"
Feb 24 02:22:13.677830 master-0 kubenswrapper[31411]: I0224 02:22:13.677817 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="766929c6-ef3a-4dee-ae94-76e32a331888" containerName="installer"
Feb 24 02:22:13.678687 master-0 kubenswrapper[31411]: I0224 02:22:13.678639 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:13.682775 master-0 kubenswrapper[31411]: I0224 02:22:13.682701 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 24 02:22:13.682775 master-0 kubenswrapper[31411]: I0224 02:22:13.682757 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-rchfr"
Feb 24 02:22:13.687554 master-0 kubenswrapper[31411]: I0224 02:22:13.687449 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 24 02:22:13.715886 master-0 kubenswrapper[31411]: I0224 02:22:13.715809 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-var-lock\") pod \"installer-5-master-0\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:13.716048 master-0 kubenswrapper[31411]: I0224 02:22:13.715931 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:13.716125 master-0 kubenswrapper[31411]: I0224 02:22:13.716050 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:13.818175 master-0 kubenswrapper[31411]: I0224 02:22:13.818100 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:13.818384 master-0 kubenswrapper[31411]: I0224 02:22:13.818217 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-var-lock\") pod \"installer-5-master-0\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:13.818384 master-0 kubenswrapper[31411]: I0224 02:22:13.818310 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:13.818531 master-0 kubenswrapper[31411]: I0224 02:22:13.818437 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-var-lock\") pod \"installer-5-master-0\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:13.818531 master-0 kubenswrapper[31411]: I0224 02:22:13.818471 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:13.848522 master-0 kubenswrapper[31411]: I0224 02:22:13.848440 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:14.019484 master-0 kubenswrapper[31411]: I0224 02:22:14.019435 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 02:22:14.584174 master-0 kubenswrapper[31411]: I0224 02:22:14.584061 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 24 02:22:14.596171 master-0 kubenswrapper[31411]: W0224 02:22:14.596118 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfc19da8d_13e6_4e8e_a506_9b067fd8870b.slice/crio-35547e7cc36e35890d8387b2a0f04861875a37acf51a8e71be48936b9906d034 WatchSource:0}: Error finding container 35547e7cc36e35890d8387b2a0f04861875a37acf51a8e71be48936b9906d034: Status 404 returned error can't find the container with id 35547e7cc36e35890d8387b2a0f04861875a37acf51a8e71be48936b9906d034
Feb 24 02:22:15.467340 master-0 kubenswrapper[31411]: I0224 02:22:15.467241 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"fc19da8d-13e6-4e8e-a506-9b067fd8870b","Type":"ContainerStarted","Data":"f4e42ea1faefa429220929122f1bbe401c1306aa84786457ebf92acc65f11798"}
Feb 24 02:22:15.467340 master-0 kubenswrapper[31411]: I0224 02:22:15.467330 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"fc19da8d-13e6-4e8e-a506-9b067fd8870b","Type":"ContainerStarted","Data":"35547e7cc36e35890d8387b2a0f04861875a37acf51a8e71be48936b9906d034"}
Feb 24 02:22:15.498615 master-0 kubenswrapper[31411]: I0224 02:22:15.497074 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.497051156 podStartE2EDuration="2.497051156s" podCreationTimestamp="2026-02-24 02:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:22:15.494836944 +0000 UTC m=+78.712034830" watchObservedRunningTime="2026-02-24 02:22:15.497051156 +0000 UTC m=+78.714249032"
Feb 24 02:22:16.183529 master-0 kubenswrapper[31411]: I0224 02:22:16.183433 31411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 24 02:22:16.184520 master-0 kubenswrapper[31411]: I0224 02:22:16.183567 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 24 02:22:16.184520 master-0 kubenswrapper[31411]: I0224 02:22:16.183696 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 02:22:16.184984 master-0 kubenswrapper[31411]: I0224 02:22:16.184909 31411 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e8427f995dbba6ba67b5688619b31dac495d913cbeaa8eabfe11085df7d0a498"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 24 02:22:16.185159 master-0 kubenswrapper[31411]: I0224 02:22:16.185119 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" containerID="cri-o://e8427f995dbba6ba67b5688619b31dac495d913cbeaa8eabfe11085df7d0a498" gracePeriod=30
Feb 24 02:22:34.517996 master-0 kubenswrapper[31411]: I0224 02:22:34.517889 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 02:22:46.800273 master-0 kubenswrapper[31411]: I0224 02:22:46.800177 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_754ca2ae56da4950b59492ccafe15df5/kube-controller-manager/0.log"
Feb 24 02:22:46.801106 master-0 kubenswrapper[31411]: I0224 02:22:46.800304 31411 generic.go:334] "Generic (PLEG): container finished" podID="754ca2ae56da4950b59492ccafe15df5" containerID="e8427f995dbba6ba67b5688619b31dac495d913cbeaa8eabfe11085df7d0a498" exitCode=137
Feb 24 02:22:46.801106 master-0 kubenswrapper[31411]: I0224 02:22:46.800400 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"754ca2ae56da4950b59492ccafe15df5","Type":"ContainerDied","Data":"e8427f995dbba6ba67b5688619b31dac495d913cbeaa8eabfe11085df7d0a498"}
Feb 24 02:22:47.818256 master-0 kubenswrapper[31411]: I0224 02:22:47.818131 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_754ca2ae56da4950b59492ccafe15df5/kube-controller-manager/0.log"
Feb 24 02:22:47.819242 master-0 kubenswrapper[31411]: I0224 02:22:47.818304 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"754ca2ae56da4950b59492ccafe15df5","Type":"ContainerStarted","Data":"9250496c585327c26795a4ba925297eda6aefab50f23daee888a6e7c19b4af75"}
Feb 24 02:22:53.560387 master-0 kubenswrapper[31411]: I0224 02:22:53.560325 31411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 24 02:22:53.563419 master-0 kubenswrapper[31411]: I0224 02:22:53.563380 31411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 24 02:22:53.563718 master-0 kubenswrapper[31411]: I0224 02:22:53.563568 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:22:53.564661 master-0 kubenswrapper[31411]: I0224 02:22:53.564255 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver" containerID="cri-o://3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80" gracePeriod=15
Feb 24 02:22:53.564661 master-0 kubenswrapper[31411]: I0224 02:22:53.564309 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976" gracePeriod=15
Feb 24 02:22:53.564661 master-0 kubenswrapper[31411]: I0224 02:22:53.564433 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27" gracePeriod=15
Feb 24 02:22:53.565027 master-0 kubenswrapper[31411]: I0224 02:22:53.564687 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-syncer" containerID="cri-o://558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c" gracePeriod=15
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.565902 31411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: E0224 02:22:53.566277 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-syncer"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566301 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-syncer"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: E0224 02:22:53.566319 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="setup"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566334 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="setup"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: E0224 02:22:53.566350 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-check-endpoints"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566372 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-check-endpoints"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: E0224 02:22:53.566393 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-insecure-readyz"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566408 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-insecure-readyz"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: E0224 02:22:53.566449 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-regeneration-controller"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566463 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-regeneration-controller"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: E0224 02:22:53.566500 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566514 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566797 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566824 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-check-endpoints"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566880 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-syncer"
Feb 24 02:22:53.567497 master-0 kubenswrapper[31411]: I0224 02:22:53.566916 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-insecure-readyz"
Feb 24 02:22:53.578186 master-0 kubenswrapper[31411]: I0224 02:22:53.566948 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="setup"
Feb 24 02:22:53.578186 master-0 kubenswrapper[31411]: I0224 02:22:53.568449 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-cert-regeneration-controller"
Feb 24 02:22:53.578186 master-0 kubenswrapper[31411]: I0224 02:22:53.569243 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver-check-endpoints" containerID="cri-o://50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1" gracePeriod=15
Feb 24 02:22:53.737446 master-0 kubenswrapper[31411]: E0224 02:22:53.737330 31411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 24 02:22:53.738655 master-0 kubenswrapper[31411]: E0224 02:22:53.738568 31411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 24 02:22:53.739779 master-0 kubenswrapper[31411]: E0224 02:22:53.739714 31411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 24 02:22:53.740672 master-0 kubenswrapper[31411]: E0224 02:22:53.740616 31411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 24 02:22:53.741377 master-0 kubenswrapper[31411]: E0224 02:22:53.741299 31411 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 24 02:22:53.741377 master-0 kubenswrapper[31411]: I0224 02:22:53.741338 31411 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 24 02:22:53.742069 master-0 kubenswrapper[31411]: E0224 02:22:53.741997 31411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 24 02:22:53.756267 master-0 kubenswrapper[31411]: I0224 02:22:53.756214 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:22:53.756346 master-0 kubenswrapper[31411]: I0224 02:22:53.756313 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:22:53.756722 master-0 kubenswrapper[31411]: I0224 02:22:53.756673 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:22:53.756798 master-0 kubenswrapper[31411]: I0224 02:22:53.756725 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:22:53.756798 master-0 kubenswrapper[31411]: I0224 02:22:53.756756 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:22:53.756885 master-0 kubenswrapper[31411]: I0224 02:22:53.756815 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:22:53.756945 master-0 kubenswrapper[31411]: I0224 02:22:53.756907 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 02:22:53.757000 master-0 kubenswrapper[31411]: I0224 02:22:53.756954 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:22:53.859394 master-0 kubenswrapper[31411]: I0224
02:22:53.859211 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.859559 master-0 kubenswrapper[31411]: I0224 02:22:53.859420 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.859559 master-0 kubenswrapper[31411]: I0224 02:22:53.859421 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.859559 master-0 kubenswrapper[31411]: I0224 02:22:53.859490 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:22:53.859559 master-0 kubenswrapper[31411]: I0224 02:22:53.859523 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:22:53.859559 master-0 kubenswrapper[31411]: I0224 02:22:53.859538 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.859937 master-0 kubenswrapper[31411]: I0224 02:22:53.859564 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.859937 master-0 kubenswrapper[31411]: I0224 02:22:53.859651 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:22:53.859937 master-0 kubenswrapper[31411]: I0224 02:22:53.859663 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.859937 master-0 kubenswrapper[31411]: I0224 02:22:53.859687 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:22:53.859937 master-0 kubenswrapper[31411]: I0224 02:22:53.859717 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.859937 master-0 kubenswrapper[31411]: I0224 02:22:53.859762 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:22:53.859937 master-0 kubenswrapper[31411]: I0224 02:22:53.859814 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.859937 master-0 kubenswrapper[31411]: I0224 02:22:53.859923 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.859937 master-0 kubenswrapper[31411]: I0224 02:22:53.859925 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:53.860520 master-0 kubenswrapper[31411]: I0224 02:22:53.860027 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:22:53.877923 master-0 kubenswrapper[31411]: I0224 02:22:53.877836 31411 generic.go:334] "Generic (PLEG): container finished" podID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" containerID="f4e42ea1faefa429220929122f1bbe401c1306aa84786457ebf92acc65f11798" exitCode=0 Feb 24 02:22:53.878019 master-0 kubenswrapper[31411]: I0224 02:22:53.877890 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"fc19da8d-13e6-4e8e-a506-9b067fd8870b","Type":"ContainerDied","Data":"f4e42ea1faefa429220929122f1bbe401c1306aa84786457ebf92acc65f11798"} Feb 24 02:22:53.879485 master-0 kubenswrapper[31411]: I0224 02:22:53.879385 31411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:53.880494 master-0 kubenswrapper[31411]: I0224 02:22:53.880417 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:53.882515 
master-0 kubenswrapper[31411]: I0224 02:22:53.882456 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver-cert-syncer/0.log" Feb 24 02:22:53.883756 master-0 kubenswrapper[31411]: I0224 02:22:53.883655 31411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1" exitCode=0 Feb 24 02:22:53.883756 master-0 kubenswrapper[31411]: I0224 02:22:53.883747 31411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27" exitCode=0 Feb 24 02:22:53.884052 master-0 kubenswrapper[31411]: I0224 02:22:53.883840 31411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976" exitCode=0 Feb 24 02:22:53.884052 master-0 kubenswrapper[31411]: I0224 02:22:53.883865 31411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c" exitCode=2 Feb 24 02:22:53.944086 master-0 kubenswrapper[31411]: E0224 02:22:53.943989 31411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 24 02:22:54.276830 master-0 kubenswrapper[31411]: I0224 02:22:54.275828 31411 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" 
start-of-body= Feb 24 02:22:54.276830 master-0 kubenswrapper[31411]: I0224 02:22:54.275927 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="888e23114cf20f3bf6573c5f7b88d7d0" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:54.278089 master-0 kubenswrapper[31411]: E0224 02:22:54.277792 31411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Feb 24 02:22:54.278089 master-0 kubenswrapper[31411]: &Event{ObjectMeta:{kube-apiserver-master-0.18970d74ef4f4bb1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:888e23114cf20f3bf6573c5f7b88d7d0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:22:54.278089 master-0 kubenswrapper[31411]: body: Feb 24 02:22:54.278089 master-0 kubenswrapper[31411]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:22:54.275898289 +0000 UTC m=+117.493096175,LastTimestamp:2026-02-24 02:22:54.275898289 +0000 UTC m=+117.493096175,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 24 02:22:54.278089 master-0 kubenswrapper[31411]: > Feb 24 02:22:54.346034 master-0 kubenswrapper[31411]: E0224 02:22:54.345942 31411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 24 02:22:55.152743 master-0 kubenswrapper[31411]: E0224 02:22:55.152473 31411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 24 02:22:55.409263 master-0 kubenswrapper[31411]: I0224 02:22:55.409039 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 24 02:22:55.410765 master-0 kubenswrapper[31411]: I0224 02:22:55.410677 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:55.587722 master-0 kubenswrapper[31411]: I0224 02:22:55.587605 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kube-api-access\") pod \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " Feb 24 02:22:55.588134 master-0 kubenswrapper[31411]: I0224 02:22:55.587772 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-var-lock\") pod \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " Feb 24 02:22:55.588134 master-0 kubenswrapper[31411]: I0224 02:22:55.587809 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kubelet-dir\") pod \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\" (UID: \"fc19da8d-13e6-4e8e-a506-9b067fd8870b\") " Feb 24 02:22:55.588134 master-0 kubenswrapper[31411]: I0224 02:22:55.588011 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fc19da8d-13e6-4e8e-a506-9b067fd8870b" (UID: "fc19da8d-13e6-4e8e-a506-9b067fd8870b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:22:55.588445 master-0 kubenswrapper[31411]: I0224 02:22:55.588387 31411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:22:55.588445 master-0 kubenswrapper[31411]: I0224 02:22:55.588438 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-var-lock" (OuterVolumeSpecName: "var-lock") pod "fc19da8d-13e6-4e8e-a506-9b067fd8870b" (UID: "fc19da8d-13e6-4e8e-a506-9b067fd8870b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:22:55.592618 master-0 kubenswrapper[31411]: I0224 02:22:55.592514 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fc19da8d-13e6-4e8e-a506-9b067fd8870b" (UID: "fc19da8d-13e6-4e8e-a506-9b067fd8870b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:22:55.722884 master-0 kubenswrapper[31411]: I0224 02:22:55.720774 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc19da8d-13e6-4e8e-a506-9b067fd8870b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:22:55.722884 master-0 kubenswrapper[31411]: I0224 02:22:55.720836 31411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc19da8d-13e6-4e8e-a506-9b067fd8870b-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:22:55.907443 master-0 kubenswrapper[31411]: I0224 02:22:55.906751 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"fc19da8d-13e6-4e8e-a506-9b067fd8870b","Type":"ContainerDied","Data":"35547e7cc36e35890d8387b2a0f04861875a37acf51a8e71be48936b9906d034"} Feb 24 02:22:55.907443 master-0 kubenswrapper[31411]: I0224 02:22:55.906804 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35547e7cc36e35890d8387b2a0f04861875a37acf51a8e71be48936b9906d034" Feb 24 02:22:55.907443 master-0 kubenswrapper[31411]: I0224 02:22:55.906875 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 24 02:22:55.963069 master-0 kubenswrapper[31411]: E0224 02:22:55.962784 31411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Feb 24 02:22:55.963069 master-0 kubenswrapper[31411]: &Event{ObjectMeta:{kube-apiserver-master-0.18970d74ef4f4bb1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:888e23114cf20f3bf6573c5f7b88d7d0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:22:55.963069 master-0 kubenswrapper[31411]: body: Feb 24 02:22:55.963069 master-0 kubenswrapper[31411]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:22:54.275898289 +0000 UTC m=+117.493096175,LastTimestamp:2026-02-24 02:22:54.275898289 +0000 UTC m=+117.493096175,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 24 02:22:55.963069 master-0 kubenswrapper[31411]: > Feb 24 02:22:55.981986 master-0 kubenswrapper[31411]: I0224 02:22:55.981888 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:56.078791 master-0 kubenswrapper[31411]: I0224 02:22:56.078707 31411 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver-cert-syncer/0.log" Feb 24 02:22:56.080328 master-0 kubenswrapper[31411]: I0224 02:22:56.080263 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:22:56.081885 master-0 kubenswrapper[31411]: I0224 02:22:56.081813 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:56.082804 master-0 kubenswrapper[31411]: I0224 02:22:56.082717 31411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:56.183212 master-0 kubenswrapper[31411]: I0224 02:22:56.183087 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:22:56.183980 master-0 kubenswrapper[31411]: I0224 02:22:56.183285 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:22:56.191055 master-0 kubenswrapper[31411]: I0224 02:22:56.190988 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:22:56.192491 master-0 kubenswrapper[31411]: I0224 02:22:56.192420 31411 status_manager.go:851] "Failed to get status for pod" 
podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:56.193236 master-0 kubenswrapper[31411]: I0224 02:22:56.193175 31411 status_manager.go:851] "Failed to get status for pod" podUID="754ca2ae56da4950b59492ccafe15df5" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:56.193928 master-0 kubenswrapper[31411]: I0224 02:22:56.193870 31411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:56.228681 master-0 kubenswrapper[31411]: I0224 02:22:56.228540 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") pod \"888e23114cf20f3bf6573c5f7b88d7d0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " Feb 24 02:22:56.228791 master-0 kubenswrapper[31411]: I0224 02:22:56.228681 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "888e23114cf20f3bf6573c5f7b88d7d0" (UID: "888e23114cf20f3bf6573c5f7b88d7d0"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:22:56.228975 master-0 kubenswrapper[31411]: I0224 02:22:56.228922 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") pod \"888e23114cf20f3bf6573c5f7b88d7d0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " Feb 24 02:22:56.229075 master-0 kubenswrapper[31411]: I0224 02:22:56.229044 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "888e23114cf20f3bf6573c5f7b88d7d0" (UID: "888e23114cf20f3bf6573c5f7b88d7d0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:22:56.229154 master-0 kubenswrapper[31411]: I0224 02:22:56.229060 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") pod \"888e23114cf20f3bf6573c5f7b88d7d0\" (UID: \"888e23114cf20f3bf6573c5f7b88d7d0\") " Feb 24 02:22:56.229154 master-0 kubenswrapper[31411]: I0224 02:22:56.229109 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "888e23114cf20f3bf6573c5f7b88d7d0" (UID: "888e23114cf20f3bf6573c5f7b88d7d0"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:22:56.229618 master-0 kubenswrapper[31411]: I0224 02:22:56.229534 31411 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:22:56.229618 master-0 kubenswrapper[31411]: I0224 02:22:56.229565 31411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:22:56.229765 master-0 kubenswrapper[31411]: I0224 02:22:56.229628 31411 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/888e23114cf20f3bf6573c5f7b88d7d0-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:22:56.753618 master-0 kubenswrapper[31411]: E0224 02:22:56.753488 31411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 24 02:22:56.922095 master-0 kubenswrapper[31411]: I0224 02:22:56.922021 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_888e23114cf20f3bf6573c5f7b88d7d0/kube-apiserver-cert-syncer/0.log" Feb 24 02:22:56.923405 master-0 kubenswrapper[31411]: I0224 02:22:56.923343 31411 generic.go:334] "Generic (PLEG): container finished" podID="888e23114cf20f3bf6573c5f7b88d7d0" containerID="3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80" exitCode=0 Feb 24 02:22:56.923532 master-0 kubenswrapper[31411]: I0224 02:22:56.923480 31411 scope.go:117] "RemoveContainer" containerID="50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1" Feb 24 02:22:56.923629 master-0 
kubenswrapper[31411]: I0224 02:22:56.923551 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:22:56.959186 master-0 kubenswrapper[31411]: I0224 02:22:56.958828 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:56.959783 master-0 kubenswrapper[31411]: I0224 02:22:56.959692 31411 status_manager.go:851] "Failed to get status for pod" podUID="754ca2ae56da4950b59492ccafe15df5" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:56.960642 master-0 kubenswrapper[31411]: I0224 02:22:56.960543 31411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:56.961661 master-0 kubenswrapper[31411]: I0224 02:22:56.961613 31411 scope.go:117] "RemoveContainer" containerID="cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27" Feb 24 02:22:56.990403 master-0 kubenswrapper[31411]: I0224 02:22:56.990231 31411 scope.go:117] "RemoveContainer" containerID="6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976" Feb 24 02:22:57.023344 master-0 kubenswrapper[31411]: I0224 02:22:57.022918 31411 scope.go:117] "RemoveContainer" 
containerID="558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c" Feb 24 02:22:57.060104 master-0 kubenswrapper[31411]: I0224 02:22:57.060029 31411 scope.go:117] "RemoveContainer" containerID="3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80" Feb 24 02:22:57.090869 master-0 kubenswrapper[31411]: I0224 02:22:57.090815 31411 scope.go:117] "RemoveContainer" containerID="4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4" Feb 24 02:22:57.104785 master-0 kubenswrapper[31411]: I0224 02:22:57.104702 31411 status_manager.go:851] "Failed to get status for pod" podUID="888e23114cf20f3bf6573c5f7b88d7d0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:57.106467 master-0 kubenswrapper[31411]: I0224 02:22:57.106324 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:57.107638 master-0 kubenswrapper[31411]: I0224 02:22:57.107546 31411 status_manager.go:851] "Failed to get status for pod" podUID="754ca2ae56da4950b59492ccafe15df5" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:57.111144 master-0 kubenswrapper[31411]: I0224 02:22:57.111085 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888e23114cf20f3bf6573c5f7b88d7d0" 
path="/var/lib/kubelet/pods/888e23114cf20f3bf6573c5f7b88d7d0/volumes" Feb 24 02:22:57.124444 master-0 kubenswrapper[31411]: I0224 02:22:57.124387 31411 scope.go:117] "RemoveContainer" containerID="50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1" Feb 24 02:22:57.125043 master-0 kubenswrapper[31411]: E0224 02:22:57.124987 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1\": container with ID starting with 50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1 not found: ID does not exist" containerID="50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1" Feb 24 02:22:57.125140 master-0 kubenswrapper[31411]: I0224 02:22:57.125050 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1"} err="failed to get container status \"50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1\": rpc error: code = NotFound desc = could not find container \"50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1\": container with ID starting with 50cb231a9b1774d52ada393ca11772e3b1ec2821c7a67614161aa92f0c51c9f1 not found: ID does not exist" Feb 24 02:22:57.125140 master-0 kubenswrapper[31411]: I0224 02:22:57.125095 31411 scope.go:117] "RemoveContainer" containerID="cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27" Feb 24 02:22:57.125770 master-0 kubenswrapper[31411]: E0224 02:22:57.125714 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27\": container with ID starting with cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27 not found: ID does not exist" 
containerID="cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27" Feb 24 02:22:57.125868 master-0 kubenswrapper[31411]: I0224 02:22:57.125765 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27"} err="failed to get container status \"cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27\": rpc error: code = NotFound desc = could not find container \"cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27\": container with ID starting with cc4f41e88d31c09269221f1953bba1f1ec74ac34cb3604d797dd60e2b7ff3d27 not found: ID does not exist" Feb 24 02:22:57.125868 master-0 kubenswrapper[31411]: I0224 02:22:57.125793 31411 scope.go:117] "RemoveContainer" containerID="6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976" Feb 24 02:22:57.126287 master-0 kubenswrapper[31411]: E0224 02:22:57.126219 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976\": container with ID starting with 6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976 not found: ID does not exist" containerID="6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976" Feb 24 02:22:57.126367 master-0 kubenswrapper[31411]: I0224 02:22:57.126292 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976"} err="failed to get container status \"6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976\": rpc error: code = NotFound desc = could not find container \"6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976\": container with ID starting with 6b1118fa0c775e5798c55738a3388475285cbd52e99121aa6b42aa4b89f48976 not found: ID does not exist" Feb 24 02:22:57.126367 master-0 
kubenswrapper[31411]: I0224 02:22:57.126339 31411 scope.go:117] "RemoveContainer" containerID="558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c" Feb 24 02:22:57.126987 master-0 kubenswrapper[31411]: E0224 02:22:57.126938 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c\": container with ID starting with 558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c not found: ID does not exist" containerID="558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c" Feb 24 02:22:57.127079 master-0 kubenswrapper[31411]: I0224 02:22:57.126981 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c"} err="failed to get container status \"558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c\": rpc error: code = NotFound desc = could not find container \"558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c\": container with ID starting with 558bf7531d8535d6ff0e2eef748fcf2e0526fa528cbc80b5c0930f84e0c9378c not found: ID does not exist" Feb 24 02:22:57.127079 master-0 kubenswrapper[31411]: I0224 02:22:57.127008 31411 scope.go:117] "RemoveContainer" containerID="3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80" Feb 24 02:22:57.127445 master-0 kubenswrapper[31411]: E0224 02:22:57.127391 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80\": container with ID starting with 3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80 not found: ID does not exist" containerID="3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80" Feb 24 02:22:57.127525 master-0 kubenswrapper[31411]: I0224 02:22:57.127439 
31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80"} err="failed to get container status \"3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80\": rpc error: code = NotFound desc = could not find container \"3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80\": container with ID starting with 3a6d28be061daa57e672f2fb4170c8cb1508d58e58f77136d5136349ebce9c80 not found: ID does not exist" Feb 24 02:22:57.127525 master-0 kubenswrapper[31411]: I0224 02:22:57.127469 31411 scope.go:117] "RemoveContainer" containerID="4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4" Feb 24 02:22:57.128060 master-0 kubenswrapper[31411]: E0224 02:22:57.128004 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4\": container with ID starting with 4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4 not found: ID does not exist" containerID="4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4" Feb 24 02:22:57.128142 master-0 kubenswrapper[31411]: I0224 02:22:57.128056 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4"} err="failed to get container status \"4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4\": rpc error: code = NotFound desc = could not find container \"4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4\": container with ID starting with 4e2723790b49b8efef2a3cd4b841ce0f7ce144ba7c018da13a9e536997af68e4 not found: ID does not exist" Feb 24 02:22:58.661761 master-0 kubenswrapper[31411]: E0224 02:22:58.661631 31411 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:58.662955 master-0 kubenswrapper[31411]: I0224 02:22:58.662900 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:58.709049 master-0 kubenswrapper[31411]: W0224 02:22:58.708957 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2146f0e3671998cad8bbc2464b009ab7.slice/crio-9f07e660646bed701a082c7699b7c31a09d8ca565ce2f64327afd7810489e11b WatchSource:0}: Error finding container 9f07e660646bed701a082c7699b7c31a09d8ca565ce2f64327afd7810489e11b: Status 404 returned error can't find the container with id 9f07e660646bed701a082c7699b7c31a09d8ca565ce2f64327afd7810489e11b Feb 24 02:22:58.946135 master-0 kubenswrapper[31411]: I0224 02:22:58.945946 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"2146f0e3671998cad8bbc2464b009ab7","Type":"ContainerStarted","Data":"9f07e660646bed701a082c7699b7c31a09d8ca565ce2f64327afd7810489e11b"} Feb 24 02:22:59.955520 master-0 kubenswrapper[31411]: E0224 02:22:59.955446 31411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 24 02:22:59.958245 master-0 kubenswrapper[31411]: I0224 02:22:59.958170 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"2146f0e3671998cad8bbc2464b009ab7","Type":"ContainerStarted","Data":"8a7d4b17e99ce12c7613b41df4f07cef823745e189c7d46a4fee547cc41d905c"} Feb 24 02:22:59.959710 master-0 kubenswrapper[31411]: E0224 02:22:59.959638 31411 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:22:59.959836 master-0 kubenswrapper[31411]: I0224 02:22:59.959713 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:22:59.960857 master-0 kubenswrapper[31411]: I0224 02:22:59.960759 31411 status_manager.go:851] "Failed to get status for pod" podUID="754ca2ae56da4950b59492ccafe15df5" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:00.968757 master-0 kubenswrapper[31411]: E0224 02:23:00.968673 31411 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 02:23:05.965614 master-0 kubenswrapper[31411]: E0224 02:23:05.965336 31411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial 
tcp 192.168.32.10:6443: connect: connection refused" event=< Feb 24 02:23:05.965614 master-0 kubenswrapper[31411]: &Event{ObjectMeta:{kube-apiserver-master-0.18970d74ef4f4bb1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:888e23114cf20f3bf6573c5f7b88d7d0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 02:23:05.965614 master-0 kubenswrapper[31411]: body: Feb 24 02:23:05.965614 master-0 kubenswrapper[31411]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 02:22:54.275898289 +0000 UTC m=+117.493096175,LastTimestamp:2026-02-24 02:22:54.275898289 +0000 UTC m=+117.493096175,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 24 02:23:05.965614 master-0 kubenswrapper[31411]: > Feb 24 02:23:06.091975 master-0 kubenswrapper[31411]: I0224 02:23:06.091891 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:06.093680 master-0 kubenswrapper[31411]: I0224 02:23:06.093569 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:06.094877 master-0 kubenswrapper[31411]: I0224 02:23:06.094721 31411 status_manager.go:851] "Failed to get status for pod" podUID="754ca2ae56da4950b59492ccafe15df5" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:06.126078 master-0 kubenswrapper[31411]: I0224 02:23:06.126004 31411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:06.126078 master-0 kubenswrapper[31411]: I0224 02:23:06.126054 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:06.127202 master-0 kubenswrapper[31411]: E0224 02:23:06.127129 31411 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:06.128153 master-0 kubenswrapper[31411]: I0224 02:23:06.128107 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:06.190830 master-0 kubenswrapper[31411]: I0224 02:23:06.190757 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:23:06.193931 master-0 kubenswrapper[31411]: I0224 02:23:06.193853 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:06.194824 master-0 kubenswrapper[31411]: I0224 02:23:06.194762 31411 status_manager.go:851] "Failed to get status for pod" podUID="754ca2ae56da4950b59492ccafe15df5" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:06.357227 master-0 kubenswrapper[31411]: E0224 02:23:06.357098 31411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Feb 24 02:23:07.023647 master-0 kubenswrapper[31411]: I0224 02:23:07.023548 31411 generic.go:334] "Generic (PLEG): container finished" podID="487622064474ed0ec70f7bf2a0fcb80b" containerID="d9b7915bdbd3a1215520f69b7ff7a13fefc559280a722ef9ced31092988beca5" exitCode=0 Feb 24 02:23:07.024466 master-0 kubenswrapper[31411]: I0224 02:23:07.023682 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerDied","Data":"d9b7915bdbd3a1215520f69b7ff7a13fefc559280a722ef9ced31092988beca5"} Feb 24 02:23:07.024466 master-0 kubenswrapper[31411]: I0224 02:23:07.023775 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"c3600b15b67d9a4415902b76a19140a9a16baf27390f21e3cf6f4b5a4d508418"} Feb 24 02:23:07.024466 master-0 kubenswrapper[31411]: I0224 02:23:07.024257 31411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:07.024466 master-0 kubenswrapper[31411]: I0224 02:23:07.024289 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:07.025556 master-0 kubenswrapper[31411]: I0224 02:23:07.025476 31411 status_manager.go:851] "Failed to get status for pod" podUID="754ca2ae56da4950b59492ccafe15df5" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:07.025556 master-0 kubenswrapper[31411]: E0224 02:23:07.025502 31411 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:07.026373 master-0 kubenswrapper[31411]: I0224 02:23:07.026303 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:07.103050 master-0 kubenswrapper[31411]: I0224 02:23:07.102959 31411 status_manager.go:851] "Failed to get status for pod" podUID="487622064474ed0ec70f7bf2a0fcb80b" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:07.103831 master-0 kubenswrapper[31411]: I0224 02:23:07.103780 31411 status_manager.go:851] "Failed to get status for pod" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:07.104739 master-0 kubenswrapper[31411]: I0224 02:23:07.104680 31411 status_manager.go:851] "Failed to get status for pod" podUID="754ca2ae56da4950b59492ccafe15df5" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 02:23:08.046614 master-0 kubenswrapper[31411]: I0224 02:23:08.040377 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"845b8fb70fadf1f0185f18388f5cf4364385687e4f88ff68052786102ba7d93b"} Feb 24 02:23:08.046614 master-0 kubenswrapper[31411]: I0224 02:23:08.040435 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"b2f02b6b6a43ecd387377032fc25f41c30bbca6775fdf3e18984edf40d0109a0"} Feb 24 02:23:09.056860 master-0 kubenswrapper[31411]: I0224 02:23:09.056799 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"6e85bc4ca8b019e2e905ab7c4446f8a9f51c697cedf1702c3fb766bc59818150"} Feb 24 02:23:09.056860 master-0 kubenswrapper[31411]: I0224 02:23:09.056860 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"5574a4f666f8dc9ad40963089d7551b6e29aabd805b13b0e15fe464c0cc12495"} Feb 24 02:23:09.056860 master-0 kubenswrapper[31411]: I0224 02:23:09.056875 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"dda15f70bbb873e11f28aa6f6b2a862210a732d9bc86661ecad6a9c6f644d0aa"} Feb 24 02:23:09.057566 master-0 kubenswrapper[31411]: I0224 02:23:09.057074 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:09.057566 master-0 kubenswrapper[31411]: I0224 02:23:09.057321 31411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:09.057566 master-0 kubenswrapper[31411]: I0224 02:23:09.057372 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:11.128741 master-0 kubenswrapper[31411]: I0224 02:23:11.128631 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:11.128741 master-0 kubenswrapper[31411]: I0224 02:23:11.128716 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:11.138892 master-0 kubenswrapper[31411]: I0224 02:23:11.138827 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:14.088552 master-0 kubenswrapper[31411]: I0224 02:23:14.088485 31411 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:14.130380 master-0 kubenswrapper[31411]: I0224 02:23:14.130209 31411 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7689f5c-1454-4048-b5e5-9b1b0b1784c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T02:23:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T02:23:07Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T02:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T02:23:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2f02b6b6a43ecd387377032fc25f41c30bbca6775fdf3e18984edf40d0109a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T02:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dda15f70bbb873e11f28aa6f6b2a862210a732d9bc86661ecad6a9c6f644d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T02:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845b8fb70fadf1f0185f18388f5cf4364385687e4f88ff68052786102ba7d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e14262
22ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T02:23:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e85bc4ca8b019e2e905ab7c4446f8a9f51c697cedf1702c3fb766bc59818150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T02:23:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5574a4f666f8dc9ad40963089d7551b6e29aabd805b13b0e15fe464c0cc12495\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-24T02:23:08Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7915bdbd3a1215520f69b7ff7a13fefc559280a722ef9ced31092988beca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9b7915bdbd3a1215520f69b7ff7a13fefc559280a722ef9ced31092988beca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T02:23:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T02:23:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-master-0\": Pod \"kube-apiserver-master-0\" is invalid: metadata.uid: Invalid value: \"b7689f5c-1454-4048-b5e5-9b1b0b1784c7\": field is immutable" Feb 24 02:23:14.199809 master-0 kubenswrapper[31411]: I0224 02:23:14.199731 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="c647306d-581d-465c-8d93-bd5957c01409" Feb 24 02:23:15.117120 master-0 kubenswrapper[31411]: I0224 02:23:15.117062 31411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:15.117120 master-0 kubenswrapper[31411]: I0224 02:23:15.117116 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:15.122175 master-0 kubenswrapper[31411]: I0224 
02:23:15.122082 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="c647306d-581d-465c-8d93-bd5957c01409" Feb 24 02:23:15.123789 master-0 kubenswrapper[31411]: I0224 02:23:15.123736 31411 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-master-0" containerID="cri-o://b2f02b6b6a43ecd387377032fc25f41c30bbca6775fdf3e18984edf40d0109a0" Feb 24 02:23:15.123789 master-0 kubenswrapper[31411]: I0224 02:23:15.123777 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:16.125562 master-0 kubenswrapper[31411]: I0224 02:23:16.125498 31411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:16.125562 master-0 kubenswrapper[31411]: I0224 02:23:16.125552 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:16.130534 master-0 kubenswrapper[31411]: I0224 02:23:16.130461 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="c647306d-581d-465c-8d93-bd5957c01409" Feb 24 02:23:23.987297 master-0 kubenswrapper[31411]: I0224 02:23:23.987199 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 24 02:23:24.310532 master-0 kubenswrapper[31411]: I0224 02:23:24.310344 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 02:23:24.434191 master-0 kubenswrapper[31411]: I0224 02:23:24.434104 31411 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 24 02:23:24.674707 master-0 kubenswrapper[31411]: I0224 02:23:24.674518 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 02:23:25.055111 master-0 kubenswrapper[31411]: I0224 02:23:25.055019 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 02:23:25.097212 master-0 kubenswrapper[31411]: I0224 02:23:25.097101 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 02:23:25.250204 master-0 kubenswrapper[31411]: I0224 02:23:25.250104 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 02:23:25.318404 master-0 kubenswrapper[31411]: I0224 02:23:25.318222 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 24 02:23:25.367456 master-0 kubenswrapper[31411]: I0224 02:23:25.367392 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 02:23:25.443365 master-0 kubenswrapper[31411]: I0224 02:23:25.443309 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 24 02:23:25.461937 master-0 kubenswrapper[31411]: I0224 02:23:25.461893 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 24 02:23:25.863314 master-0 kubenswrapper[31411]: I0224 02:23:25.863256 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 02:23:25.972755 master-0 kubenswrapper[31411]: I0224 02:23:25.972664 31411 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 24 02:23:26.001912 master-0 kubenswrapper[31411]: I0224 02:23:26.001848 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 24 02:23:26.159042 master-0 kubenswrapper[31411]: I0224 02:23:26.158833 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-pbr2s" Feb 24 02:23:26.197530 master-0 kubenswrapper[31411]: I0224 02:23:26.197473 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 02:23:26.219965 master-0 kubenswrapper[31411]: I0224 02:23:26.219899 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 24 02:23:26.725986 master-0 kubenswrapper[31411]: I0224 02:23:26.725908 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 02:23:26.747539 master-0 kubenswrapper[31411]: I0224 02:23:26.747464 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 02:23:26.945797 master-0 kubenswrapper[31411]: I0224 02:23:26.945725 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 02:23:26.981270 master-0 kubenswrapper[31411]: I0224 02:23:26.981140 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 02:23:27.068933 master-0 kubenswrapper[31411]: I0224 02:23:27.068858 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 02:23:27.162660 master-0 kubenswrapper[31411]: I0224 
02:23:27.162571 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 02:23:27.257190 master-0 kubenswrapper[31411]: I0224 02:23:27.257035 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 24 02:23:27.268388 master-0 kubenswrapper[31411]: I0224 02:23:27.268331 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 24 02:23:27.416796 master-0 kubenswrapper[31411]: I0224 02:23:27.416724 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 02:23:27.464081 master-0 kubenswrapper[31411]: I0224 02:23:27.463997 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 02:23:27.545997 master-0 kubenswrapper[31411]: I0224 02:23:27.545858 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-c5cjc" Feb 24 02:23:27.638966 master-0 kubenswrapper[31411]: I0224 02:23:27.638879 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 02:23:27.649085 master-0 kubenswrapper[31411]: I0224 02:23:27.649005 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 02:23:27.662971 master-0 kubenswrapper[31411]: I0224 02:23:27.662898 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 02:23:27.717212 master-0 kubenswrapper[31411]: I0224 02:23:27.717136 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 02:23:27.723455 master-0 kubenswrapper[31411]: I0224 02:23:27.723311 31411 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 24 02:23:27.776287 master-0 kubenswrapper[31411]: I0224 02:23:27.776146 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 02:23:28.105709 master-0 kubenswrapper[31411]: I0224 02:23:28.105641 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 02:23:28.152431 master-0 kubenswrapper[31411]: I0224 02:23:28.152368 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 24 02:23:28.196988 master-0 kubenswrapper[31411]: I0224 02:23:28.196928 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 24 02:23:28.278244 master-0 kubenswrapper[31411]: I0224 02:23:28.278176 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 24 02:23:28.351879 master-0 kubenswrapper[31411]: I0224 02:23:28.351814 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 02:23:28.366562 master-0 kubenswrapper[31411]: I0224 02:23:28.366416 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 02:23:28.391685 master-0 kubenswrapper[31411]: I0224 02:23:28.391601 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 24 02:23:28.536149 master-0 kubenswrapper[31411]: I0224 02:23:28.536080 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 02:23:28.621056 master-0 
kubenswrapper[31411]: I0224 02:23:28.620938 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 02:23:28.672436 master-0 kubenswrapper[31411]: I0224 02:23:28.672384 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 24 02:23:28.680001 master-0 kubenswrapper[31411]: I0224 02:23:28.679936 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 02:23:28.717322 master-0 kubenswrapper[31411]: I0224 02:23:28.717262 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 02:23:28.730135 master-0 kubenswrapper[31411]: I0224 02:23:28.730071 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-ddb6g" Feb 24 02:23:28.799820 master-0 kubenswrapper[31411]: I0224 02:23:28.799749 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-ndtpv" Feb 24 02:23:28.865991 master-0 kubenswrapper[31411]: I0224 02:23:28.865928 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 24 02:23:28.897375 master-0 kubenswrapper[31411]: I0224 02:23:28.897253 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 02:23:28.925827 master-0 kubenswrapper[31411]: I0224 02:23:28.925766 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 02:23:28.988461 master-0 kubenswrapper[31411]: I0224 02:23:28.988394 31411 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 02:23:29.003290 master-0 
kubenswrapper[31411]: I0224 02:23:29.003229 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 02:23:29.111231 master-0 kubenswrapper[31411]: I0224 02:23:29.111168 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 02:23:29.186030 master-0 kubenswrapper[31411]: I0224 02:23:29.185880 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 02:23:29.319313 master-0 kubenswrapper[31411]: I0224 02:23:29.319200 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 24 02:23:29.334304 master-0 kubenswrapper[31411]: I0224 02:23:29.334166 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-brkg4" Feb 24 02:23:29.666770 master-0 kubenswrapper[31411]: I0224 02:23:29.666691 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 24 02:23:29.677384 master-0 kubenswrapper[31411]: I0224 02:23:29.677328 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 02:23:29.685003 master-0 kubenswrapper[31411]: I0224 02:23:29.684721 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 02:23:29.728141 master-0 kubenswrapper[31411]: I0224 02:23:29.728099 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 02:23:29.746277 master-0 kubenswrapper[31411]: I0224 02:23:29.746242 31411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 02:23:29.828824 master-0 kubenswrapper[31411]: I0224 02:23:29.828710 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 02:23:29.844597 master-0 kubenswrapper[31411]: I0224 02:23:29.844509 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 02:23:29.870051 master-0 kubenswrapper[31411]: I0224 02:23:29.870017 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 02:23:29.956306 master-0 kubenswrapper[31411]: I0224 02:23:29.956238 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 02:23:29.975315 master-0 kubenswrapper[31411]: I0224 02:23:29.975243 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 02:23:29.993378 master-0 kubenswrapper[31411]: I0224 02:23:29.993321 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 24 02:23:30.015979 master-0 kubenswrapper[31411]: I0224 02:23:30.015942 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-xlxp2" Feb 24 02:23:30.024146 master-0 kubenswrapper[31411]: I0224 02:23:30.024082 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 02:23:30.044636 master-0 kubenswrapper[31411]: I0224 02:23:30.044532 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 02:23:30.071079 master-0 kubenswrapper[31411]: I0224 02:23:30.071040 31411 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 02:23:30.101271 master-0 kubenswrapper[31411]: I0224 02:23:30.101235 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 24 02:23:30.193355 master-0 kubenswrapper[31411]: I0224 02:23:30.193247 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 02:23:30.234360 master-0 kubenswrapper[31411]: I0224 02:23:30.234212 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 02:23:30.299330 master-0 kubenswrapper[31411]: I0224 02:23:30.299252 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-k5dgr" Feb 24 02:23:30.306782 master-0 kubenswrapper[31411]: I0224 02:23:30.306730 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 02:23:30.393897 master-0 kubenswrapper[31411]: I0224 02:23:30.393815 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 02:23:30.461497 master-0 kubenswrapper[31411]: I0224 02:23:30.461391 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-rpcz4" Feb 24 02:23:30.482595 master-0 kubenswrapper[31411]: I0224 02:23:30.482501 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-2xl7c" Feb 24 02:23:30.496897 master-0 kubenswrapper[31411]: I0224 02:23:30.496753 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 02:23:30.503739 master-0 kubenswrapper[31411]: I0224 02:23:30.503657 31411 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 24 02:23:30.644211 master-0 kubenswrapper[31411]: I0224 02:23:30.644152 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 02:23:30.718776 master-0 kubenswrapper[31411]: I0224 02:23:30.718700 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 02:23:30.790795 master-0 kubenswrapper[31411]: I0224 02:23:30.790645 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 02:23:30.895493 master-0 kubenswrapper[31411]: I0224 02:23:30.895406 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 02:23:30.907277 master-0 kubenswrapper[31411]: I0224 02:23:30.907230 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 02:23:30.993113 master-0 kubenswrapper[31411]: I0224 02:23:30.993015 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 02:23:31.040551 master-0 kubenswrapper[31411]: I0224 02:23:31.040509 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-fdtcj" Feb 24 02:23:31.060085 master-0 kubenswrapper[31411]: I0224 02:23:31.059946 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 24 02:23:31.262204 master-0 kubenswrapper[31411]: I0224 02:23:31.262120 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 02:23:31.289298 master-0 kubenswrapper[31411]: I0224 02:23:31.289220 
31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 24 02:23:31.372813 master-0 kubenswrapper[31411]: I0224 02:23:31.372673 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 02:23:31.436877 master-0 kubenswrapper[31411]: I0224 02:23:31.436773 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 02:23:31.445388 master-0 kubenswrapper[31411]: I0224 02:23:31.445324 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 02:23:31.549670 master-0 kubenswrapper[31411]: I0224 02:23:31.549561 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 02:23:31.563813 master-0 kubenswrapper[31411]: I0224 02:23:31.563758 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 02:23:31.684823 master-0 kubenswrapper[31411]: I0224 02:23:31.684638 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 02:23:31.713984 master-0 kubenswrapper[31411]: I0224 02:23:31.713906 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 02:23:31.766682 master-0 kubenswrapper[31411]: I0224 02:23:31.766549 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 24 02:23:31.786864 master-0 kubenswrapper[31411]: I0224 02:23:31.786820 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 02:23:31.845126 master-0 kubenswrapper[31411]: I0224 02:23:31.845044 
31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 02:23:31.858069 master-0 kubenswrapper[31411]: I0224 02:23:31.858014 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 02:23:31.911725 master-0 kubenswrapper[31411]: I0224 02:23:31.911658 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 02:23:31.923417 master-0 kubenswrapper[31411]: I0224 02:23:31.923351 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 02:23:31.989974 master-0 kubenswrapper[31411]: I0224 02:23:31.989909 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 02:23:31.997900 master-0 kubenswrapper[31411]: I0224 02:23:31.997863 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 02:23:32.088733 master-0 kubenswrapper[31411]: I0224 02:23:32.088638 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 24 02:23:32.094767 master-0 kubenswrapper[31411]: I0224 02:23:32.094731 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 24 02:23:32.179714 master-0 kubenswrapper[31411]: I0224 02:23:32.179673 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 02:23:32.193784 master-0 kubenswrapper[31411]: I0224 02:23:32.193753 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 24 02:23:32.243939 master-0 kubenswrapper[31411]: I0224 02:23:32.243747 
31411 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 02:23:32.287145 master-0 kubenswrapper[31411]: I0224 02:23:32.287073 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 02:23:32.298637 master-0 kubenswrapper[31411]: I0224 02:23:32.297304 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 02:23:32.378011 master-0 kubenswrapper[31411]: I0224 02:23:32.377927 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 02:23:32.446789 master-0 kubenswrapper[31411]: I0224 02:23:32.446727 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 24 02:23:32.490446 master-0 kubenswrapper[31411]: I0224 02:23:32.490373 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 02:23:32.523453 master-0 kubenswrapper[31411]: I0224 02:23:32.523312 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-7k4nm" Feb 24 02:23:32.525056 master-0 kubenswrapper[31411]: I0224 02:23:32.525007 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 02:23:32.554125 master-0 kubenswrapper[31411]: I0224 02:23:32.554075 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 02:23:32.616378 master-0 kubenswrapper[31411]: I0224 02:23:32.616302 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 02:23:32.620789 master-0 kubenswrapper[31411]: I0224 02:23:32.620739 31411 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 24 02:23:32.651522 master-0 kubenswrapper[31411]: I0224 02:23:32.651452 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 02:23:32.664355 master-0 kubenswrapper[31411]: I0224 02:23:32.664287 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 02:23:32.741218 master-0 kubenswrapper[31411]: I0224 02:23:32.741156 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lbd2d" Feb 24 02:23:32.752052 master-0 kubenswrapper[31411]: I0224 02:23:32.752007 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 02:23:32.793913 master-0 kubenswrapper[31411]: I0224 02:23:32.793784 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rrfph" Feb 24 02:23:32.837750 master-0 kubenswrapper[31411]: I0224 02:23:32.837725 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 24 02:23:32.914497 master-0 kubenswrapper[31411]: I0224 02:23:32.914460 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 02:23:32.925710 master-0 kubenswrapper[31411]: I0224 02:23:32.925656 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 02:23:32.927388 master-0 kubenswrapper[31411]: I0224 02:23:32.927340 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-7xngw" Feb 24 02:23:33.057047 master-0 kubenswrapper[31411]: I0224 
02:23:33.056884 31411 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 02:23:33.066620 master-0 kubenswrapper[31411]: I0224 02:23:33.066535 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 02:23:33.069559 master-0 kubenswrapper[31411]: I0224 02:23:33.069486 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 24 02:23:33.069703 master-0 kubenswrapper[31411]: I0224 02:23:33.069614 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-955b69498-x847l","openshift-kube-apiserver/kube-apiserver-master-0","openshift-network-console/networking-console-plugin-79f587d78f-6bkc6","openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd","openshift-console/console-576f8c76bf-2xx46"] Feb 24 02:23:33.070173 master-0 kubenswrapper[31411]: E0224 02:23:33.070122 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" containerName="installer" Feb 24 02:23:33.070173 master-0 kubenswrapper[31411]: I0224 02:23:33.070161 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" containerName="installer" Feb 24 02:23:33.070460 master-0 kubenswrapper[31411]: I0224 02:23:33.070314 31411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:33.070460 master-0 kubenswrapper[31411]: I0224 02:23:33.070366 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b7689f5c-1454-4048-b5e5-9b1b0b1784c7" Feb 24 02:23:33.070761 master-0 kubenswrapper[31411]: I0224 02:23:33.070487 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc19da8d-13e6-4e8e-a506-9b067fd8870b" containerName="installer" Feb 24 
02:23:33.071711 master-0 kubenswrapper[31411]: I0224 02:23:33.071650 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-x847l" Feb 24 02:23:33.072884 master-0 kubenswrapper[31411]: I0224 02:23:33.072842 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" Feb 24 02:23:33.074324 master-0 kubenswrapper[31411]: I0224 02:23:33.074269 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 02:23:33.075555 master-0 kubenswrapper[31411]: I0224 02:23:33.074887 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd" Feb 24 02:23:33.075974 master-0 kubenswrapper[31411]: I0224 02:23:33.075612 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 02:23:33.075974 master-0 kubenswrapper[31411]: I0224 02:23:33.075671 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-gt947" Feb 24 02:23:33.075974 master-0 kubenswrapper[31411]: I0224 02:23:33.075831 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6","openshift-authentication/oauth-openshift-95876988f-c58ls","openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"] Feb 24 02:23:33.076421 master-0 kubenswrapper[31411]: I0224 02:23:33.076112 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 02:23:33.076421 master-0 kubenswrapper[31411]: I0224 02:23:33.076414 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.076823 master-0 kubenswrapper[31411]: I0224 02:23:33.076374 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" podUID="55a2662a-d672-4a46-9b81-bfcaf334eedb" containerName="route-controller-manager" containerID="cri-o://bcd82e1ea303c732e8cd1c96072c832298944aee61e13de8101c6575d136f541" gracePeriod=30 Feb 24 02:23:33.076823 master-0 kubenswrapper[31411]: I0224 02:23:33.076549 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 02:23:33.077562 master-0 kubenswrapper[31411]: I0224 02:23:33.077506 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 02:23:33.077562 master-0 kubenswrapper[31411]: I0224 02:23:33.077474 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" podUID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerName="controller-manager" containerID="cri-o://00fb88dd6ea7a9ddb5d7ddf189575d130d7630f12716d8a17f87c83ef377ea62" gracePeriod=30 Feb 24 02:23:33.080293 master-0 kubenswrapper[31411]: I0224 02:23:33.078234 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 24 02:23:33.080293 master-0 kubenswrapper[31411]: I0224 02:23:33.079787 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 24 02:23:33.080293 master-0 kubenswrapper[31411]: I0224 02:23:33.080100 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-kzrxs" Feb 24 02:23:33.080665 master-0 kubenswrapper[31411]: I0224 02:23:33.080452 31411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Feb 24 02:23:33.080665 master-0 kubenswrapper[31411]: I0224 02:23:33.080516 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 24 02:23:33.080933 master-0 kubenswrapper[31411]: I0224 02:23:33.080903 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 02:23:33.081809 master-0 kubenswrapper[31411]: I0224 02:23:33.081757 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 02:23:33.082421 master-0 kubenswrapper[31411]: I0224 02:23:33.082351 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-mfccl" Feb 24 02:23:33.092521 master-0 kubenswrapper[31411]: I0224 02:23:33.092278 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 02:23:33.136393 master-0 kubenswrapper[31411]: I0224 02:23:33.135871 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=19.135823544 podStartE2EDuration="19.135823544s" podCreationTimestamp="2026-02-24 02:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:23:33.126466761 +0000 UTC m=+156.343664687" watchObservedRunningTime="2026-02-24 02:23:33.135823544 +0000 UTC m=+156.353021480" Feb 24 02:23:33.169990 master-0 kubenswrapper[31411]: I0224 02:23:33.169919 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 02:23:33.177144 master-0 kubenswrapper[31411]: I0224 02:23:33.176977 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 02:23:33.226692 master-0 
kubenswrapper[31411]: I0224 02:23:33.218994 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 24 02:23:33.240999 master-0 kubenswrapper[31411]: I0224 02:23:33.240938 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-trusted-ca-bundle\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.241187 master-0 kubenswrapper[31411]: I0224 02:23:33.241025 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7a90d46f-0217-49aa-a63c-c81a371fa218-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-6bkc6\" (UID: \"7a90d46f-0217-49aa-a63c-c81a371fa218\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" Feb 24 02:23:33.241187 master-0 kubenswrapper[31411]: I0224 02:23:33.241065 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-service-ca\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.241187 master-0 kubenswrapper[31411]: I0224 02:23:33.241095 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-oauth-config\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.241187 master-0 kubenswrapper[31411]: I0224 
02:23:33.241117 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7a90d46f-0217-49aa-a63c-c81a371fa218-nginx-conf\") pod \"networking-console-plugin-79f587d78f-6bkc6\" (UID: \"7a90d46f-0217-49aa-a63c-c81a371fa218\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" Feb 24 02:23:33.241187 master-0 kubenswrapper[31411]: I0224 02:23:33.241143 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-config\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.241512 master-0 kubenswrapper[31411]: I0224 02:23:33.241200 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96mmc\" (UniqueName: \"kubernetes.io/projected/92850e60-7126-43e8-a7f7-fe411b6fc2b7-kube-api-access-96mmc\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.241512 master-0 kubenswrapper[31411]: I0224 02:23:33.241255 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/823c983e-f9a6-4074-9a69-14ec0666dfd5-monitoring-plugin-cert\") pod \"monitoring-plugin-5d9ddb8754-xtrdd\" (UID: \"823c983e-f9a6-4074-9a69-14ec0666dfd5\") " pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd" Feb 24 02:23:33.241512 master-0 kubenswrapper[31411]: I0224 02:23:33.241309 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrd8\" (UniqueName: 
\"kubernetes.io/projected/2fbb8ae4-fc8b-46ff-a295-10a1207dd571-kube-api-access-mdrd8\") pod \"downloads-955b69498-x847l\" (UID: \"2fbb8ae4-fc8b-46ff-a295-10a1207dd571\") " pod="openshift-console/downloads-955b69498-x847l" Feb 24 02:23:33.241512 master-0 kubenswrapper[31411]: I0224 02:23:33.241337 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-serving-cert\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.241512 master-0 kubenswrapper[31411]: I0224 02:23:33.241364 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-oauth-serving-cert\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.244075 master-0 kubenswrapper[31411]: I0224 02:23:33.244032 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 02:23:33.269990 master-0 kubenswrapper[31411]: I0224 02:23:33.269624 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 02:23:33.325338 master-0 kubenswrapper[31411]: I0224 02:23:33.301104 31411 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 02:23:33.325338 master-0 kubenswrapper[31411]: I0224 02:23:33.308131 31411 generic.go:334] "Generic (PLEG): container finished" podID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerID="00fb88dd6ea7a9ddb5d7ddf189575d130d7630f12716d8a17f87c83ef377ea62" exitCode=0 Feb 24 02:23:33.325338 master-0 
kubenswrapper[31411]: I0224 02:23:33.308238 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" event={"ID":"e8d6a6c0-b944-4206-9178-9a9930b303b9","Type":"ContainerDied","Data":"00fb88dd6ea7a9ddb5d7ddf189575d130d7630f12716d8a17f87c83ef377ea62"} Feb 24 02:23:33.325338 master-0 kubenswrapper[31411]: I0224 02:23:33.308394 31411 scope.go:117] "RemoveContainer" containerID="5ed088abb8fdf119602dca1779c3b84da28af95aaab8dcf8c7df738c7d83aa56" Feb 24 02:23:33.325338 master-0 kubenswrapper[31411]: I0224 02:23:33.311847 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 02:23:33.325338 master-0 kubenswrapper[31411]: I0224 02:23:33.320908 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lprdj" Feb 24 02:23:33.328831 master-0 kubenswrapper[31411]: I0224 02:23:33.328727 31411 generic.go:334] "Generic (PLEG): container finished" podID="55a2662a-d672-4a46-9b81-bfcaf334eedb" containerID="bcd82e1ea303c732e8cd1c96072c832298944aee61e13de8101c6575d136f541" exitCode=0 Feb 24 02:23:33.328831 master-0 kubenswrapper[31411]: I0224 02:23:33.328787 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" event={"ID":"55a2662a-d672-4a46-9b81-bfcaf334eedb","Type":"ContainerDied","Data":"bcd82e1ea303c732e8cd1c96072c832298944aee61e13de8101c6575d136f541"} Feb 24 02:23:33.336078 master-0 kubenswrapper[31411]: I0224 02:23:33.336019 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mdmqh" Feb 24 02:23:33.343082 master-0 kubenswrapper[31411]: I0224 02:23:33.343007 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/7a90d46f-0217-49aa-a63c-c81a371fa218-nginx-conf\") pod \"networking-console-plugin-79f587d78f-6bkc6\" (UID: \"7a90d46f-0217-49aa-a63c-c81a371fa218\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" Feb 24 02:23:33.343242 master-0 kubenswrapper[31411]: I0224 02:23:33.343092 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-oauth-config\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.343242 master-0 kubenswrapper[31411]: I0224 02:23:33.343143 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-config\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.343410 master-0 kubenswrapper[31411]: I0224 02:23:33.343250 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96mmc\" (UniqueName: \"kubernetes.io/projected/92850e60-7126-43e8-a7f7-fe411b6fc2b7-kube-api-access-96mmc\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.343637 master-0 kubenswrapper[31411]: I0224 02:23:33.343525 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/823c983e-f9a6-4074-9a69-14ec0666dfd5-monitoring-plugin-cert\") pod \"monitoring-plugin-5d9ddb8754-xtrdd\" (UID: \"823c983e-f9a6-4074-9a69-14ec0666dfd5\") " pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd" Feb 24 02:23:33.343812 master-0 kubenswrapper[31411]: I0224 02:23:33.343751 
31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdrd8\" (UniqueName: \"kubernetes.io/projected/2fbb8ae4-fc8b-46ff-a295-10a1207dd571-kube-api-access-mdrd8\") pod \"downloads-955b69498-x847l\" (UID: \"2fbb8ae4-fc8b-46ff-a295-10a1207dd571\") " pod="openshift-console/downloads-955b69498-x847l" Feb 24 02:23:33.345987 master-0 kubenswrapper[31411]: I0224 02:23:33.344441 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-serving-cert\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.345987 master-0 kubenswrapper[31411]: I0224 02:23:33.344503 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-oauth-serving-cert\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.345987 master-0 kubenswrapper[31411]: I0224 02:23:33.344564 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-trusted-ca-bundle\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.345987 master-0 kubenswrapper[31411]: I0224 02:23:33.344643 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7a90d46f-0217-49aa-a63c-c81a371fa218-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-6bkc6\" (UID: \"7a90d46f-0217-49aa-a63c-c81a371fa218\") " 
pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" Feb 24 02:23:33.345987 master-0 kubenswrapper[31411]: I0224 02:23:33.344686 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-service-ca\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.345987 master-0 kubenswrapper[31411]: I0224 02:23:33.344898 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7a90d46f-0217-49aa-a63c-c81a371fa218-nginx-conf\") pod \"networking-console-plugin-79f587d78f-6bkc6\" (UID: \"7a90d46f-0217-49aa-a63c-c81a371fa218\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" Feb 24 02:23:33.345987 master-0 kubenswrapper[31411]: I0224 02:23:33.345126 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-config\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.345987 master-0 kubenswrapper[31411]: I0224 02:23:33.345955 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-service-ca\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.346714 master-0 kubenswrapper[31411]: I0224 02:23:33.346676 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-oauth-serving-cert\") pod \"console-576f8c76bf-2xx46\" (UID: 
\"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.347258 master-0 kubenswrapper[31411]: I0224 02:23:33.347174 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-trusted-ca-bundle\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.358470 master-0 kubenswrapper[31411]: I0224 02:23:33.350805 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/823c983e-f9a6-4074-9a69-14ec0666dfd5-monitoring-plugin-cert\") pod \"monitoring-plugin-5d9ddb8754-xtrdd\" (UID: \"823c983e-f9a6-4074-9a69-14ec0666dfd5\") " pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd" Feb 24 02:23:33.358470 master-0 kubenswrapper[31411]: I0224 02:23:33.351539 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-oauth-config\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.358470 master-0 kubenswrapper[31411]: I0224 02:23:33.352605 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/7a90d46f-0217-49aa-a63c-c81a371fa218-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-6bkc6\" (UID: \"7a90d46f-0217-49aa-a63c-c81a371fa218\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" Feb 24 02:23:33.358470 master-0 kubenswrapper[31411]: I0224 02:23:33.354736 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-serving-cert\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.360299 master-0 kubenswrapper[31411]: I0224 02:23:33.360243 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 02:23:33.372368 master-0 kubenswrapper[31411]: I0224 02:23:33.372299 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdrd8\" (UniqueName: \"kubernetes.io/projected/2fbb8ae4-fc8b-46ff-a295-10a1207dd571-kube-api-access-mdrd8\") pod \"downloads-955b69498-x847l\" (UID: \"2fbb8ae4-fc8b-46ff-a295-10a1207dd571\") " pod="openshift-console/downloads-955b69498-x847l" Feb 24 02:23:33.374973 master-0 kubenswrapper[31411]: I0224 02:23:33.374914 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96mmc\" (UniqueName: \"kubernetes.io/projected/92850e60-7126-43e8-a7f7-fe411b6fc2b7-kube-api-access-96mmc\") pod \"console-576f8c76bf-2xx46\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.417239 master-0 kubenswrapper[31411]: I0224 02:23:33.417178 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-x847l" Feb 24 02:23:33.421639 master-0 kubenswrapper[31411]: I0224 02:23:33.421568 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" Feb 24 02:23:33.441850 master-0 kubenswrapper[31411]: I0224 02:23:33.441800 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd" Feb 24 02:23:33.462098 master-0 kubenswrapper[31411]: I0224 02:23:33.462046 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:23:33.484224 master-0 kubenswrapper[31411]: I0224 02:23:33.484162 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 02:23:33.588849 master-0 kubenswrapper[31411]: I0224 02:23:33.588701 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 02:23:33.611810 master-0 kubenswrapper[31411]: I0224 02:23:33.610416 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 24 02:23:33.635677 master-0 kubenswrapper[31411]: I0224 02:23:33.635617 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 24 02:23:33.728735 master-0 kubenswrapper[31411]: I0224 02:23:33.711950 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:23:33.741651 master-0 kubenswrapper[31411]: I0224 02:23:33.730436 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 02:23:33.783430 master-0 kubenswrapper[31411]: I0224 02:23:33.783357 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4"] Feb 24 02:23:33.783846 master-0 kubenswrapper[31411]: E0224 02:23:33.783807 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a2662a-d672-4a46-9b81-bfcaf334eedb" containerName="route-controller-manager" Feb 24 02:23:33.783846 master-0 kubenswrapper[31411]: I0224 02:23:33.783840 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a2662a-d672-4a46-9b81-bfcaf334eedb" containerName="route-controller-manager" Feb 24 02:23:33.784200 master-0 kubenswrapper[31411]: I0224 
02:23:33.784163 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a2662a-d672-4a46-9b81-bfcaf334eedb" containerName="route-controller-manager" Feb 24 02:23:33.785002 master-0 kubenswrapper[31411]: I0224 02:23:33.784972 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:33.792024 master-0 kubenswrapper[31411]: I0224 02:23:33.791961 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:23:33.823815 master-0 kubenswrapper[31411]: I0224 02:23:33.823770 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 02:23:33.854260 master-0 kubenswrapper[31411]: I0224 02:23:33.854124 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca\") pod \"55a2662a-d672-4a46-9b81-bfcaf334eedb\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " Feb 24 02:23:33.854260 master-0 kubenswrapper[31411]: I0224 02:23:33.854222 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert\") pod \"55a2662a-d672-4a46-9b81-bfcaf334eedb\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " Feb 24 02:23:33.854486 master-0 kubenswrapper[31411]: I0224 02:23:33.854420 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config\") pod \"55a2662a-d672-4a46-9b81-bfcaf334eedb\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " Feb 24 02:23:33.854563 master-0 kubenswrapper[31411]: I0224 02:23:33.854535 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzghr\" (UniqueName: \"kubernetes.io/projected/55a2662a-d672-4a46-9b81-bfcaf334eedb-kube-api-access-gzghr\") pod \"55a2662a-d672-4a46-9b81-bfcaf334eedb\" (UID: \"55a2662a-d672-4a46-9b81-bfcaf334eedb\") " Feb 24 02:23:33.854994 master-0 kubenswrapper[31411]: I0224 02:23:33.854929 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca" (OuterVolumeSpecName: "client-ca") pod "55a2662a-d672-4a46-9b81-bfcaf334eedb" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:23:33.855391 master-0 kubenswrapper[31411]: I0224 02:23:33.855328 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config" (OuterVolumeSpecName: "config") pod "55a2662a-d672-4a46-9b81-bfcaf334eedb" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:23:33.858151 master-0 kubenswrapper[31411]: I0224 02:23:33.858096 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a2662a-d672-4a46-9b81-bfcaf334eedb-kube-api-access-gzghr" (OuterVolumeSpecName: "kube-api-access-gzghr") pod "55a2662a-d672-4a46-9b81-bfcaf334eedb" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb"). InnerVolumeSpecName "kube-api-access-gzghr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:23:33.859137 master-0 kubenswrapper[31411]: I0224 02:23:33.859087 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55a2662a-d672-4a46-9b81-bfcaf334eedb" (UID: "55a2662a-d672-4a46-9b81-bfcaf334eedb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:23:33.862040 master-0 kubenswrapper[31411]: I0224 02:23:33.862006 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 02:23:33.874894 master-0 kubenswrapper[31411]: I0224 02:23:33.874863 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 02:23:33.955246 master-0 kubenswrapper[31411]: I0224 02:23:33.955206 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config\") pod \"e8d6a6c0-b944-4206-9178-9a9930b303b9\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " Feb 24 02:23:33.955527 master-0 kubenswrapper[31411]: I0224 02:23:33.955512 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca\") pod \"e8d6a6c0-b944-4206-9178-9a9930b303b9\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " Feb 24 02:23:33.955827 master-0 kubenswrapper[31411]: I0224 02:23:33.955810 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles\") pod \"e8d6a6c0-b944-4206-9178-9a9930b303b9\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " Feb 24 
02:23:33.955967 master-0 kubenswrapper[31411]: I0224 02:23:33.955952 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxjtb\" (UniqueName: \"kubernetes.io/projected/e8d6a6c0-b944-4206-9178-9a9930b303b9-kube-api-access-zxjtb\") pod \"e8d6a6c0-b944-4206-9178-9a9930b303b9\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " Feb 24 02:23:33.956087 master-0 kubenswrapper[31411]: I0224 02:23:33.956075 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert\") pod \"e8d6a6c0-b944-4206-9178-9a9930b303b9\" (UID: \"e8d6a6c0-b944-4206-9178-9a9930b303b9\") " Feb 24 02:23:33.956419 master-0 kubenswrapper[31411]: I0224 02:23:33.956397 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-serving-cert\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:33.956551 master-0 kubenswrapper[31411]: I0224 02:23:33.956537 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-client-ca\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:33.956669 master-0 kubenswrapper[31411]: I0224 02:23:33.956628 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e8d6a6c0-b944-4206-9178-9a9930b303b9" (UID: 
"e8d6a6c0-b944-4206-9178-9a9930b303b9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:23:33.956764 master-0 kubenswrapper[31411]: I0224 02:23:33.956743 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqxjd\" (UniqueName: \"kubernetes.io/projected/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-kube-api-access-kqxjd\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:33.956910 master-0 kubenswrapper[31411]: I0224 02:23:33.956882 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-config\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:33.957144 master-0 kubenswrapper[31411]: I0224 02:23:33.957118 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:33.957295 master-0 kubenswrapper[31411]: I0224 02:23:33.957273 31411 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:33.957426 master-0 kubenswrapper[31411]: I0224 02:23:33.957407 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzghr\" (UniqueName: \"kubernetes.io/projected/55a2662a-d672-4a46-9b81-bfcaf334eedb-kube-api-access-gzghr\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:33.957602 master-0 kubenswrapper[31411]: 
I0224 02:23:33.957558 31411 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55a2662a-d672-4a46-9b81-bfcaf334eedb-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:33.957757 master-0 kubenswrapper[31411]: I0224 02:23:33.957736 31411 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a2662a-d672-4a46-9b81-bfcaf334eedb-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:33.957890 master-0 kubenswrapper[31411]: I0224 02:23:33.956953 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config" (OuterVolumeSpecName: "config") pod "e8d6a6c0-b944-4206-9178-9a9930b303b9" (UID: "e8d6a6c0-b944-4206-9178-9a9930b303b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:23:33.958032 master-0 kubenswrapper[31411]: I0224 02:23:33.957008 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca" (OuterVolumeSpecName: "client-ca") pod "e8d6a6c0-b944-4206-9178-9a9930b303b9" (UID: "e8d6a6c0-b944-4206-9178-9a9930b303b9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:23:33.960784 master-0 kubenswrapper[31411]: I0224 02:23:33.960705 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d6a6c0-b944-4206-9178-9a9930b303b9-kube-api-access-zxjtb" (OuterVolumeSpecName: "kube-api-access-zxjtb") pod "e8d6a6c0-b944-4206-9178-9a9930b303b9" (UID: "e8d6a6c0-b944-4206-9178-9a9930b303b9"). InnerVolumeSpecName "kube-api-access-zxjtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:23:33.961313 master-0 kubenswrapper[31411]: I0224 02:23:33.961257 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e8d6a6c0-b944-4206-9178-9a9930b303b9" (UID: "e8d6a6c0-b944-4206-9178-9a9930b303b9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:23:34.007254 master-0 kubenswrapper[31411]: I0224 02:23:34.007178 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 02:23:34.034295 master-0 kubenswrapper[31411]: I0224 02:23:34.034260 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 02:23:34.056554 master-0 kubenswrapper[31411]: I0224 02:23:34.056519 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ltf57" Feb 24 02:23:34.059340 master-0 kubenswrapper[31411]: I0224 02:23:34.059297 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-serving-cert\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:34.059636 master-0 kubenswrapper[31411]: I0224 02:23:34.059607 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-client-ca\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:34.059895 master-0 kubenswrapper[31411]: I0224 02:23:34.059860 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqxjd\" (UniqueName: \"kubernetes.io/projected/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-kube-api-access-kqxjd\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:34.060232 master-0 kubenswrapper[31411]: I0224 02:23:34.060202 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-config\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:34.060516 master-0 kubenswrapper[31411]: I0224 02:23:34.060482 31411 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d6a6c0-b944-4206-9178-9a9930b303b9-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:34.060762 master-0 kubenswrapper[31411]: I0224 02:23:34.060730 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:34.060970 master-0 kubenswrapper[31411]: I0224 02:23:34.060939 31411 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8d6a6c0-b944-4206-9178-9a9930b303b9-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:34.061151 master-0 kubenswrapper[31411]: I0224 02:23:34.061117 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxjtb\" (UniqueName: 
\"kubernetes.io/projected/e8d6a6c0-b944-4206-9178-9a9930b303b9-kube-api-access-zxjtb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:34.061548 master-0 kubenswrapper[31411]: I0224 02:23:34.061464 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-client-ca\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:34.062215 master-0 kubenswrapper[31411]: I0224 02:23:34.062145 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-config\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:34.066135 master-0 kubenswrapper[31411]: I0224 02:23:34.066067 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-serving-cert\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:34.088553 master-0 kubenswrapper[31411]: I0224 02:23:34.088479 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqxjd\" (UniqueName: \"kubernetes.io/projected/171b7f58-509e-4c6a-a6c8-c0d98578c4a3-kube-api-access-kqxjd\") pod \"route-controller-manager-6cf66f6dd4-lbnq4\" (UID: \"171b7f58-509e-4c6a-a6c8-c0d98578c4a3\") " pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:34.123020 master-0 kubenswrapper[31411]: I0224 02:23:34.122884 31411 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" Feb 24 02:23:34.172233 master-0 kubenswrapper[31411]: I0224 02:23:34.172122 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 24 02:23:34.269958 master-0 kubenswrapper[31411]: I0224 02:23:34.269901 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 24 02:23:34.278417 master-0 kubenswrapper[31411]: I0224 02:23:34.278345 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 24 02:23:34.282356 master-0 kubenswrapper[31411]: I0224 02:23:34.282294 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 02:23:34.318391 master-0 kubenswrapper[31411]: I0224 02:23:34.318309 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 24 02:23:34.342786 master-0 kubenswrapper[31411]: I0224 02:23:34.342701 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" event={"ID":"e8d6a6c0-b944-4206-9178-9a9930b303b9","Type":"ContainerDied","Data":"5c3398a6c263edc9332a777f898c18bf8d4d5354af4bc2396e80f920a1e77f07"} Feb 24 02:23:34.342939 master-0 kubenswrapper[31411]: I0224 02:23:34.342772 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6" Feb 24 02:23:34.342939 master-0 kubenswrapper[31411]: I0224 02:23:34.342801 31411 scope.go:117] "RemoveContainer" containerID="00fb88dd6ea7a9ddb5d7ddf189575d130d7630f12716d8a17f87c83ef377ea62" Feb 24 02:23:34.347079 master-0 kubenswrapper[31411]: I0224 02:23:34.346918 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" event={"ID":"55a2662a-d672-4a46-9b81-bfcaf334eedb","Type":"ContainerDied","Data":"45681e7db0a00432167c0ceb01dfa150d4182b397673d5a0da048e4b9054ffea"} Feb 24 02:23:34.347204 master-0 kubenswrapper[31411]: I0224 02:23:34.347036 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd" Feb 24 02:23:34.372454 master-0 kubenswrapper[31411]: I0224 02:23:34.372397 31411 scope.go:117] "RemoveContainer" containerID="bcd82e1ea303c732e8cd1c96072c832298944aee61e13de8101c6575d136f541" Feb 24 02:23:34.428625 master-0 kubenswrapper[31411]: I0224 02:23:34.428526 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"] Feb 24 02:23:34.442529 master-0 kubenswrapper[31411]: I0224 02:23:34.442453 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56b6d9c5b7-lxwt6"] Feb 24 02:23:34.451902 master-0 kubenswrapper[31411]: I0224 02:23:34.451837 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"] Feb 24 02:23:34.459687 master-0 kubenswrapper[31411]: I0224 02:23:34.459608 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-676fddcd58-49xzd"] Feb 24 02:23:34.480740 master-0 kubenswrapper[31411]: I0224 
02:23:34.480650 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 24 02:23:34.553908 master-0 kubenswrapper[31411]: I0224 02:23:34.553861 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 02:23:34.658328 master-0 kubenswrapper[31411]: I0224 02:23:34.658157 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 02:23:34.745737 master-0 kubenswrapper[31411]: I0224 02:23:34.745654 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 24 02:23:34.949851 master-0 kubenswrapper[31411]: I0224 02:23:34.949779 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 02:23:34.961789 master-0 kubenswrapper[31411]: I0224 02:23:34.961728 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 02:23:35.115377 master-0 kubenswrapper[31411]: I0224 02:23:35.115114 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 02:23:35.118523 master-0 kubenswrapper[31411]: I0224 02:23:35.118436 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a2662a-d672-4a46-9b81-bfcaf334eedb" path="/var/lib/kubelet/pods/55a2662a-d672-4a46-9b81-bfcaf334eedb/volumes" Feb 24 02:23:35.121245 master-0 kubenswrapper[31411]: I0224 02:23:35.121174 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d6a6c0-b944-4206-9178-9a9930b303b9" path="/var/lib/kubelet/pods/e8d6a6c0-b944-4206-9178-9a9930b303b9/volumes" Feb 24 02:23:35.121973 master-0 kubenswrapper[31411]: I0224 02:23:35.121910 
31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 02:23:35.123564 master-0 kubenswrapper[31411]: I0224 02:23:35.123397 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 24 02:23:35.152562 master-0 kubenswrapper[31411]: I0224 02:23:35.152504 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 02:23:35.198717 master-0 kubenswrapper[31411]: I0224 02:23:35.198656 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 02:23:35.237899 master-0 kubenswrapper[31411]: I0224 02:23:35.237726 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 24 02:23:35.260227 master-0 kubenswrapper[31411]: I0224 02:23:35.260150 31411 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 02:23:35.289373 master-0 kubenswrapper[31411]: I0224 02:23:35.289296 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 02:23:35.304247 master-0 kubenswrapper[31411]: I0224 02:23:35.304158 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 24 02:23:35.308466 master-0 kubenswrapper[31411]: I0224 02:23:35.308351 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 02:23:35.318619 master-0 kubenswrapper[31411]: I0224 02:23:35.317802 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 24 02:23:35.374619 master-0 kubenswrapper[31411]: I0224 
02:23:35.374534 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 02:23:35.498672 master-0 kubenswrapper[31411]: I0224 02:23:35.498469 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 02:23:35.564608 master-0 kubenswrapper[31411]: I0224 02:23:35.564501 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 02:23:35.587903 master-0 kubenswrapper[31411]: I0224 02:23:35.587832 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 02:23:35.594949 master-0 kubenswrapper[31411]: I0224 02:23:35.594895 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 02:23:35.621012 master-0 kubenswrapper[31411]: I0224 02:23:35.620958 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 02:23:35.640080 master-0 kubenswrapper[31411]: I0224 02:23:35.640043 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-6fp4p" Feb 24 02:23:35.795751 master-0 kubenswrapper[31411]: I0224 02:23:35.795556 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 02:23:35.803543 master-0 kubenswrapper[31411]: I0224 02:23:35.803480 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 02:23:35.843858 master-0 kubenswrapper[31411]: I0224 02:23:35.843806 31411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 24 02:23:35.857202 master-0 kubenswrapper[31411]: I0224 02:23:35.857142 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 02:23:35.877396 master-0 kubenswrapper[31411]: I0224 02:23:35.877345 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 02:23:35.979308 master-0 kubenswrapper[31411]: I0224 02:23:35.979157 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c67bf58c9-mn7dg"] Feb 24 02:23:35.979748 master-0 kubenswrapper[31411]: E0224 02:23:35.979733 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerName="controller-manager" Feb 24 02:23:35.979841 master-0 kubenswrapper[31411]: I0224 02:23:35.979756 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerName="controller-manager" Feb 24 02:23:35.979841 master-0 kubenswrapper[31411]: E0224 02:23:35.979787 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerName="controller-manager" Feb 24 02:23:35.979841 master-0 kubenswrapper[31411]: I0224 02:23:35.979800 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerName="controller-manager" Feb 24 02:23:35.980100 master-0 kubenswrapper[31411]: I0224 02:23:35.980060 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerName="controller-manager" Feb 24 02:23:35.980193 master-0 kubenswrapper[31411]: I0224 02:23:35.980102 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d6a6c0-b944-4206-9178-9a9930b303b9" containerName="controller-manager" Feb 24 
02:23:35.981013 master-0 kubenswrapper[31411]: I0224 02:23:35.980975 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:35.985274 master-0 kubenswrapper[31411]: I0224 02:23:35.984938 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 02:23:35.985274 master-0 kubenswrapper[31411]: I0224 02:23:35.985011 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 02:23:35.986543 master-0 kubenswrapper[31411]: I0224 02:23:35.986483 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 02:23:35.986925 master-0 kubenswrapper[31411]: I0224 02:23:35.986863 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-ntn8v" Feb 24 02:23:35.987091 master-0 kubenswrapper[31411]: I0224 02:23:35.987058 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 02:23:35.988400 master-0 kubenswrapper[31411]: I0224 02:23:35.987219 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 02:23:35.994752 master-0 kubenswrapper[31411]: I0224 02:23:35.994700 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 02:23:36.102478 master-0 kubenswrapper[31411]: I0224 02:23:36.102309 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a26aae0-6a54-41a9-8532-26195010c7cc-config\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " 
pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.102478 master-0 kubenswrapper[31411]: I0224 02:23:36.102414 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a26aae0-6a54-41a9-8532-26195010c7cc-proxy-ca-bundles\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.102804 master-0 kubenswrapper[31411]: I0224 02:23:36.102688 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a26aae0-6a54-41a9-8532-26195010c7cc-client-ca\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.102804 master-0 kubenswrapper[31411]: I0224 02:23:36.102762 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a26aae0-6a54-41a9-8532-26195010c7cc-serving-cert\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.103090 master-0 kubenswrapper[31411]: I0224 02:23:36.102916 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sckg\" (UniqueName: \"kubernetes.io/projected/6a26aae0-6a54-41a9-8532-26195010c7cc-kube-api-access-4sckg\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.197745 master-0 kubenswrapper[31411]: I0224 02:23:36.197533 31411 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-m4t4r" Feb 24 02:23:36.204889 master-0 kubenswrapper[31411]: I0224 02:23:36.204822 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a26aae0-6a54-41a9-8532-26195010c7cc-client-ca\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.205048 master-0 kubenswrapper[31411]: I0224 02:23:36.204923 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a26aae0-6a54-41a9-8532-26195010c7cc-serving-cert\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.205048 master-0 kubenswrapper[31411]: I0224 02:23:36.204974 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sckg\" (UniqueName: \"kubernetes.io/projected/6a26aae0-6a54-41a9-8532-26195010c7cc-kube-api-access-4sckg\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.205809 master-0 kubenswrapper[31411]: I0224 02:23:36.205748 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a26aae0-6a54-41a9-8532-26195010c7cc-config\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.206056 master-0 kubenswrapper[31411]: I0224 02:23:36.206019 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6a26aae0-6a54-41a9-8532-26195010c7cc-proxy-ca-bundles\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.207899 master-0 kubenswrapper[31411]: I0224 02:23:36.207830 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a26aae0-6a54-41a9-8532-26195010c7cc-client-ca\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.209239 master-0 kubenswrapper[31411]: I0224 02:23:36.208521 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a26aae0-6a54-41a9-8532-26195010c7cc-config\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.213664 master-0 kubenswrapper[31411]: I0224 02:23:36.209546 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a26aae0-6a54-41a9-8532-26195010c7cc-proxy-ca-bundles\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.213664 master-0 kubenswrapper[31411]: I0224 02:23:36.212615 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a26aae0-6a54-41a9-8532-26195010c7cc-serving-cert\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.236299 master-0 kubenswrapper[31411]: I0224 02:23:36.236233 
31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sckg\" (UniqueName: \"kubernetes.io/projected/6a26aae0-6a54-41a9-8532-26195010c7cc-kube-api-access-4sckg\") pod \"controller-manager-c67bf58c9-mn7dg\" (UID: \"6a26aae0-6a54-41a9-8532-26195010c7cc\") " pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.284926 master-0 kubenswrapper[31411]: I0224 02:23:36.284853 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 24 02:23:36.293823 master-0 kubenswrapper[31411]: I0224 02:23:36.293761 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 02:23:36.310839 master-0 kubenswrapper[31411]: I0224 02:23:36.310791 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 02:23:36.317030 master-0 kubenswrapper[31411]: I0224 02:23:36.316958 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" Feb 24 02:23:36.399797 master-0 kubenswrapper[31411]: I0224 02:23:36.398561 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 02:23:36.401775 master-0 kubenswrapper[31411]: I0224 02:23:36.401715 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-shkn8" Feb 24 02:23:36.443297 master-0 kubenswrapper[31411]: I0224 02:23:36.443222 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 02:23:36.535129 master-0 kubenswrapper[31411]: I0224 02:23:36.535063 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7phpl" Feb 24 02:23:36.679035 master-0 kubenswrapper[31411]: I0224 02:23:36.678842 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 02:23:36.768957 master-0 kubenswrapper[31411]: I0224 02:23:36.768879 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 24 02:23:36.791801 master-0 kubenswrapper[31411]: I0224 02:23:36.791713 31411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 24 02:23:36.792790 master-0 kubenswrapper[31411]: I0224 02:23:36.792666 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor" containerID="cri-o://8a7d4b17e99ce12c7613b41df4f07cef823745e189c7d46a4fee547cc41d905c" gracePeriod=5 Feb 24 02:23:36.831070 master-0 kubenswrapper[31411]: I0224 
02:23:36.831008 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 02:23:36.857495 master-0 kubenswrapper[31411]: I0224 02:23:36.857436 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 24 02:23:36.870800 master-0 kubenswrapper[31411]: I0224 02:23:36.870756 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 02:23:36.979062 master-0 kubenswrapper[31411]: I0224 02:23:36.978970 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 02:23:36.996305 master-0 kubenswrapper[31411]: I0224 02:23:36.996232 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 24 02:23:37.053073 master-0 kubenswrapper[31411]: I0224 02:23:37.053012 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 02:23:37.060252 master-0 kubenswrapper[31411]: I0224 02:23:37.060181 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 02:23:37.078275 master-0 kubenswrapper[31411]: I0224 02:23:37.078215 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 24 02:23:37.128492 master-0 kubenswrapper[31411]: I0224 02:23:37.128413 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 02:23:37.253697 master-0 kubenswrapper[31411]: I0224 02:23:37.253460 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 02:23:37.279635 master-0 kubenswrapper[31411]: I0224 02:23:37.279526 
31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-ncv6j"
Feb 24 02:23:37.319212 master-0 kubenswrapper[31411]: I0224 02:23:37.319147 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 24 02:23:37.321682 master-0 kubenswrapper[31411]: I0224 02:23:37.321638 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 02:23:37.333720 master-0 kubenswrapper[31411]: I0224 02:23:37.333653 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 24 02:23:37.346295 master-0 kubenswrapper[31411]: I0224 02:23:37.346232 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 24 02:23:37.513557 master-0 kubenswrapper[31411]: I0224 02:23:37.513383 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 24 02:23:37.525465 master-0 kubenswrapper[31411]: I0224 02:23:37.525413 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 24 02:23:37.564942 master-0 kubenswrapper[31411]: I0224 02:23:37.564839 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 24 02:23:37.600530 master-0 kubenswrapper[31411]: I0224 02:23:37.600474 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 24 02:23:37.612859 master-0 kubenswrapper[31411]: I0224 02:23:37.612814 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 02:23:37.663436 master-0 kubenswrapper[31411]: I0224 02:23:37.663272 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 24 02:23:37.706817 master-0 kubenswrapper[31411]: I0224 02:23:37.706773 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 24 02:23:37.734220 master-0 kubenswrapper[31411]: I0224 02:23:37.734142 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 24 02:23:37.740784 master-0 kubenswrapper[31411]: I0224 02:23:37.740734 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 24 02:23:37.810076 master-0 kubenswrapper[31411]: I0224 02:23:37.809860 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 24 02:23:37.814321 master-0 kubenswrapper[31411]: I0224 02:23:37.814258 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 24 02:23:37.819139 master-0 kubenswrapper[31411]: I0224 02:23:37.819087 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 24 02:23:37.862570 master-0 kubenswrapper[31411]: I0224 02:23:37.862518 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 24 02:23:38.079222 master-0 kubenswrapper[31411]: I0224 02:23:38.079031 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 24 02:23:38.136265 master-0 kubenswrapper[31411]: I0224 02:23:38.136203 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 24 02:23:38.141953 master-0 kubenswrapper[31411]: I0224 02:23:38.141900 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-n76llk2nkkst"
Feb 24 02:23:38.163413 master-0 kubenswrapper[31411]: I0224 02:23:38.163359 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 24 02:23:38.313836 master-0 kubenswrapper[31411]: I0224 02:23:38.313764 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 24 02:23:38.381603 master-0 kubenswrapper[31411]: I0224 02:23:38.381412 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 24 02:23:38.485936 master-0 kubenswrapper[31411]: I0224 02:23:38.485864 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 02:23:38.510019 master-0 kubenswrapper[31411]: I0224 02:23:38.509953 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 24 02:23:38.517467 master-0 kubenswrapper[31411]: I0224 02:23:38.517410 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 24 02:23:38.525248 master-0 kubenswrapper[31411]: I0224 02:23:38.525178 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 24 02:23:38.647488 master-0 kubenswrapper[31411]: I0224 02:23:38.647340 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 24 02:23:38.652331 master-0 kubenswrapper[31411]: I0224 02:23:38.652267 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 24 02:23:38.668645 master-0 kubenswrapper[31411]: I0224 02:23:38.668561 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-pfvnv"
Feb 24 02:23:38.920339 master-0 kubenswrapper[31411]: I0224 02:23:38.920151 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 24 02:23:38.938762 master-0 kubenswrapper[31411]: I0224 02:23:38.938676 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-x847l"]
Feb 24 02:23:38.945687 master-0 kubenswrapper[31411]: I0224 02:23:38.945553 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-576f8c76bf-2xx46"]
Feb 24 02:23:38.950428 master-0 kubenswrapper[31411]: I0224 02:23:38.950382 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd"]
Feb 24 02:23:38.955318 master-0 kubenswrapper[31411]: I0224 02:23:38.955241 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-6bkc6"]
Feb 24 02:23:38.960211 master-0 kubenswrapper[31411]: I0224 02:23:38.960131 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c67bf58c9-mn7dg"]
Feb 24 02:23:38.964758 master-0 kubenswrapper[31411]: I0224 02:23:38.964721 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4"]
Feb 24 02:23:38.981988 master-0 kubenswrapper[31411]: I0224 02:23:38.981925 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 24 02:23:38.984628 master-0 kubenswrapper[31411]: I0224 02:23:38.984596 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 24 02:23:38.999675 master-0 kubenswrapper[31411]: I0224 02:23:38.999601 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 24 02:23:39.004971 master-0 kubenswrapper[31411]: I0224 02:23:39.004942 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 24 02:23:39.155087 master-0 kubenswrapper[31411]: I0224 02:23:39.155029 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 24 02:23:39.158559 master-0 kubenswrapper[31411]: I0224 02:23:39.158513 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 02:23:39.169533 master-0 kubenswrapper[31411]: I0224 02:23:39.169488 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 24 02:23:39.188486 master-0 kubenswrapper[31411]: I0224 02:23:39.188454 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-5kctc"
Feb 24 02:23:39.190771 master-0 kubenswrapper[31411]: I0224 02:23:39.190731 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 24 02:23:39.410702 master-0 kubenswrapper[31411]: I0224 02:23:39.410638 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 24 02:23:39.424087 master-0 kubenswrapper[31411]: I0224 02:23:39.424014 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd"]
Feb 24 02:23:39.505432 master-0 kubenswrapper[31411]: I0224 02:23:39.505368 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-576f8c76bf-2xx46"]
Feb 24 02:23:39.511066 master-0 kubenswrapper[31411]: I0224 02:23:39.509263 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 02:23:39.515661 master-0 kubenswrapper[31411]: I0224 02:23:39.515620 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-x847l"]
Feb 24 02:23:39.560974 master-0 kubenswrapper[31411]: I0224 02:23:39.560934 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 02:23:39.614091 master-0 kubenswrapper[31411]: I0224 02:23:39.614045 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-6bkc6"]
Feb 24 02:23:39.626290 master-0 kubenswrapper[31411]: I0224 02:23:39.626241 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4"]
Feb 24 02:23:39.630132 master-0 kubenswrapper[31411]: I0224 02:23:39.629832 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c67bf58c9-mn7dg"]
Feb 24 02:23:39.630450 master-0 kubenswrapper[31411]: W0224 02:23:39.630409 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a26aae0_6a54_41a9_8532_26195010c7cc.slice/crio-df2867fdd951a4ce2a0762a3f53be1d340095c8d04135a01877e9104d3b39503 WatchSource:0}: Error finding container df2867fdd951a4ce2a0762a3f53be1d340095c8d04135a01877e9104d3b39503: Status 404 returned error can't find the container with id df2867fdd951a4ce2a0762a3f53be1d340095c8d04135a01877e9104d3b39503
Feb 24 02:23:39.664077 master-0 kubenswrapper[31411]: I0224 02:23:39.664041 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 02:23:39.692779 master-0 kubenswrapper[31411]: I0224 02:23:39.692728 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 02:23:39.831164 master-0 kubenswrapper[31411]: I0224 02:23:39.831113 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 24 02:23:39.864416 master-0 kubenswrapper[31411]: I0224 02:23:39.864360 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 24 02:23:40.081413 master-0 kubenswrapper[31411]: I0224 02:23:40.081352 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 24 02:23:40.193674 master-0 kubenswrapper[31411]: I0224 02:23:40.193569 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 02:23:40.350121 master-0 kubenswrapper[31411]: I0224 02:23:40.349900 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 24 02:23:40.388212 master-0 kubenswrapper[31411]: I0224 02:23:40.388171 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 02:23:40.426355 master-0 kubenswrapper[31411]: I0224 02:23:40.426297 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" event={"ID":"7a90d46f-0217-49aa-a63c-c81a371fa218","Type":"ContainerStarted","Data":"f8c4f0d87bfd7b9c7ac1ea6bc739239dd01f4a062c8927e314b39de53cfefd73"}
Feb 24 02:23:40.429160 master-0 kubenswrapper[31411]: I0224 02:23:40.428932 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" event={"ID":"6a26aae0-6a54-41a9-8532-26195010c7cc","Type":"ContainerStarted","Data":"9e02bfa3bffe37828c134aaed4b8b7f016e6a544ad0271a508cb9496b9ed499a"}
Feb 24 02:23:40.429160 master-0 kubenswrapper[31411]: I0224 02:23:40.428987 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" event={"ID":"6a26aae0-6a54-41a9-8532-26195010c7cc","Type":"ContainerStarted","Data":"df2867fdd951a4ce2a0762a3f53be1d340095c8d04135a01877e9104d3b39503"}
Feb 24 02:23:40.429395 master-0 kubenswrapper[31411]: I0224 02:23:40.429311 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg"
Feb 24 02:23:40.430621 master-0 kubenswrapper[31411]: I0224 02:23:40.430456 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576f8c76bf-2xx46" event={"ID":"92850e60-7126-43e8-a7f7-fe411b6fc2b7","Type":"ContainerStarted","Data":"56a343b3f667321b02fb71aad0519d20e7f5a4b0fc86a758fd56c0bbaff5ee4b"}
Feb 24 02:23:40.433205 master-0 kubenswrapper[31411]: I0224 02:23:40.433128 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd" event={"ID":"823c983e-f9a6-4074-9a69-14ec0666dfd5","Type":"ContainerStarted","Data":"68df632e40cde22947110b963feda8fcfe4e62c2d3df1ce48144dada63580be6"}
Feb 24 02:23:40.434635 master-0 kubenswrapper[31411]: I0224 02:23:40.433422 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg"
Feb 24 02:23:40.437716 master-0 kubenswrapper[31411]: I0224 02:23:40.437365 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" event={"ID":"171b7f58-509e-4c6a-a6c8-c0d98578c4a3","Type":"ContainerStarted","Data":"5d3728b88328dd17309b559ad486be42781fc42ef4c844983f65cd791462aab5"}
Feb 24 02:23:40.437716 master-0 kubenswrapper[31411]: I0224 02:23:40.437399 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" event={"ID":"171b7f58-509e-4c6a-a6c8-c0d98578c4a3","Type":"ContainerStarted","Data":"2fdde57c869891ddd0b46b43a976103a2a5af61954a1506763850a16e93f7e76"}
Feb 24 02:23:40.437890 master-0 kubenswrapper[31411]: I0224 02:23:40.437861 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4"
Feb 24 02:23:40.444443 master-0 kubenswrapper[31411]: I0224 02:23:40.444393 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-x847l" event={"ID":"2fbb8ae4-fc8b-46ff-a295-10a1207dd571","Type":"ContainerStarted","Data":"5b0629508d4ceea4d583d2a880769c5edcd036bf1ecafe4cf7bf387622c82951"}
Feb 24 02:23:40.462501 master-0 kubenswrapper[31411]: I0224 02:23:40.462418 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" podStartSLOduration=26.46239531 podStartE2EDuration="26.46239531s" podCreationTimestamp="2026-02-24 02:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:23:40.45560296 +0000 UTC m=+163.672800826" watchObservedRunningTime="2026-02-24 02:23:40.46239531 +0000 UTC m=+163.679593146"
Feb 24 02:23:40.505963 master-0 kubenswrapper[31411]: I0224 02:23:40.505776 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4" podStartSLOduration=26.505752817 podStartE2EDuration="26.505752817s" podCreationTimestamp="2026-02-24 02:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:23:40.501633932 +0000 UTC m=+163.718831788" watchObservedRunningTime="2026-02-24 02:23:40.505752817 +0000 UTC m=+163.722950663"
Feb 24 02:23:40.581643 master-0 kubenswrapper[31411]: I0224 02:23:40.581548 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 24 02:23:40.584173 master-0 kubenswrapper[31411]: I0224 02:23:40.584139 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cf66f6dd4-lbnq4"
Feb 24 02:23:40.826247 master-0 kubenswrapper[31411]: I0224 02:23:40.826196 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 24 02:23:40.885403 master-0 kubenswrapper[31411]: I0224 02:23:40.885338 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 24 02:23:41.053585 master-0 kubenswrapper[31411]: I0224 02:23:41.053534 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 24 02:23:41.109012 master-0 kubenswrapper[31411]: I0224 02:23:41.108921 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-thdws"
Feb 24 02:23:42.232757 master-0 kubenswrapper[31411]: I0224 02:23:42.232709 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 24 02:23:42.465007 master-0 kubenswrapper[31411]: I0224 02:23:42.464957 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log"
Feb 24 02:23:42.465287 master-0 kubenswrapper[31411]: I0224 02:23:42.465019 31411 generic.go:334] "Generic (PLEG): container finished" podID="2146f0e3671998cad8bbc2464b009ab7" containerID="8a7d4b17e99ce12c7613b41df4f07cef823745e189c7d46a4fee547cc41d905c" exitCode=137
Feb 24 02:23:42.468016 master-0 kubenswrapper[31411]: I0224 02:23:42.467949 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" event={"ID":"7a90d46f-0217-49aa-a63c-c81a371fa218","Type":"ContainerStarted","Data":"a46d661897fb6fcaabb66e1afa2560d85317b005726ad92984efc24755383e86"}
Feb 24 02:23:42.471113 master-0 kubenswrapper[31411]: I0224 02:23:42.471055 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd" event={"ID":"823c983e-f9a6-4074-9a69-14ec0666dfd5","Type":"ContainerStarted","Data":"0a66e7944b10e78d716f8f2a8fadd7504895909a801ec4981b804d33be363818"}
Feb 24 02:23:42.471598 master-0 kubenswrapper[31411]: I0224 02:23:42.471516 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd"
Feb 24 02:23:42.479449 master-0 kubenswrapper[31411]: I0224 02:23:42.479392 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd"
Feb 24 02:23:42.502833 master-0 kubenswrapper[31411]: I0224 02:23:42.502714 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-79f587d78f-6bkc6" podStartSLOduration=26.542782248 podStartE2EDuration="28.502696119s" podCreationTimestamp="2026-02-24 02:23:14 +0000 UTC" firstStartedPulling="2026-02-24 02:23:39.626149851 +0000 UTC m=+162.843347707" lastFinishedPulling="2026-02-24 02:23:41.586063732 +0000 UTC m=+164.803261578" observedRunningTime="2026-02-24 02:23:42.497971528 +0000 UTC m=+165.715169414" watchObservedRunningTime="2026-02-24 02:23:42.502696119 +0000 UTC m=+165.719893965"
Feb 24 02:23:42.519732 master-0 kubenswrapper[31411]: I0224 02:23:42.519646 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd" podStartSLOduration=26.451639071 podStartE2EDuration="28.519613761s" podCreationTimestamp="2026-02-24 02:23:14 +0000 UTC" firstStartedPulling="2026-02-24 02:23:39.430725739 +0000 UTC m=+162.647923605" lastFinishedPulling="2026-02-24 02:23:41.498700449 +0000 UTC m=+164.715898295" observedRunningTime="2026-02-24 02:23:42.517724958 +0000 UTC m=+165.734922804" watchObservedRunningTime="2026-02-24 02:23:42.519613761 +0000 UTC m=+165.736811637"
Feb 24 02:23:43.601924 master-0 kubenswrapper[31411]: I0224 02:23:43.601876 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log"
Feb 24 02:23:43.602384 master-0 kubenswrapper[31411]: I0224 02:23:43.601984 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:23:43.755738 master-0 kubenswrapper[31411]: I0224 02:23:43.755666 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 24 02:23:43.755868 master-0 kubenswrapper[31411]: I0224 02:23:43.755782 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 24 02:23:43.755868 master-0 kubenswrapper[31411]: I0224 02:23:43.755823 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 24 02:23:43.756059 master-0 kubenswrapper[31411]: I0224 02:23:43.755865 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 24 02:23:43.756059 master-0 kubenswrapper[31411]: I0224 02:23:43.755911 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 24 02:23:43.756320 master-0 kubenswrapper[31411]: I0224 02:23:43.756255 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log" (OuterVolumeSpecName: "var-log") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:23:43.756383 master-0 kubenswrapper[31411]: I0224 02:23:43.756334 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock" (OuterVolumeSpecName: "var-lock") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:23:43.756434 master-0 kubenswrapper[31411]: I0224 02:23:43.756369 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests" (OuterVolumeSpecName: "manifests") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:23:43.756487 master-0 kubenswrapper[31411]: I0224 02:23:43.756381 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:23:43.765143 master-0 kubenswrapper[31411]: I0224 02:23:43.765106 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:23:43.858875 master-0 kubenswrapper[31411]: I0224 02:23:43.858830 31411 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") on node \"master-0\" DevicePath \"\""
Feb 24 02:23:43.859146 master-0 kubenswrapper[31411]: I0224 02:23:43.859097 31411 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 02:23:43.859287 master-0 kubenswrapper[31411]: I0224 02:23:43.859264 31411 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") on node \"master-0\" DevicePath \"\""
Feb 24 02:23:43.859428 master-0 kubenswrapper[31411]: I0224 02:23:43.859403 31411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 02:23:43.859547 master-0 kubenswrapper[31411]: I0224 02:23:43.859527 31411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 02:23:44.490699 master-0 kubenswrapper[31411]: I0224 02:23:44.490618 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log"
Feb 24 02:23:44.491075 master-0 kubenswrapper[31411]: I0224 02:23:44.490795 31411 scope.go:117] "RemoveContainer" containerID="8a7d4b17e99ce12c7613b41df4f07cef823745e189c7d46a4fee547cc41d905c"
Feb 24 02:23:44.491075 master-0 kubenswrapper[31411]: I0224 02:23:44.490857 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 02:23:44.494126 master-0 kubenswrapper[31411]: I0224 02:23:44.494042 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576f8c76bf-2xx46" event={"ID":"92850e60-7126-43e8-a7f7-fe411b6fc2b7","Type":"ContainerStarted","Data":"e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b"}
Feb 24 02:23:44.535659 master-0 kubenswrapper[31411]: I0224 02:23:44.535510 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-576f8c76bf-2xx46" podStartSLOduration=26.453097461 podStartE2EDuration="30.535475629s" podCreationTimestamp="2026-02-24 02:23:14 +0000 UTC" firstStartedPulling="2026-02-24 02:23:39.518857293 +0000 UTC m=+162.736055149" lastFinishedPulling="2026-02-24 02:23:43.601235431 +0000 UTC m=+166.818433317" observedRunningTime="2026-02-24 02:23:44.524729849 +0000 UTC m=+167.741927705" watchObservedRunningTime="2026-02-24 02:23:44.535475629 +0000 UTC m=+167.752673505"
Feb 24 02:23:45.109033 master-0 kubenswrapper[31411]: I0224 02:23:45.108839 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2146f0e3671998cad8bbc2464b009ab7" path="/var/lib/kubelet/pods/2146f0e3671998cad8bbc2464b009ab7/volumes"
Feb 24 02:23:53.462763 master-0 kubenswrapper[31411]: I0224 02:23:53.462672 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-576f8c76bf-2xx46"
Feb 24 02:23:53.462763 master-0 kubenswrapper[31411]: I0224 02:23:53.462768 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-576f8c76bf-2xx46"
Feb 24 02:23:53.466066 master-0 kubenswrapper[31411]: I0224 02:23:53.465998 31411 patch_prober.go:28] interesting pod/console-576f8c76bf-2xx46 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Feb 24 02:23:53.466143 master-0 kubenswrapper[31411]: I0224 02:23:53.466095 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-576f8c76bf-2xx46" podUID="92850e60-7126-43e8-a7f7-fe411b6fc2b7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Feb 24 02:23:55.273951 master-0 kubenswrapper[31411]: I0224 02:23:55.272336 31411 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-4qf9p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body=
Feb 24 02:23:55.273951 master-0 kubenswrapper[31411]: I0224 02:23:55.272473 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" podUID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused"
Feb 24 02:23:55.619820 master-0 kubenswrapper[31411]: I0224 02:23:55.619550 31411 generic.go:334] "Generic (PLEG): container finished" podID="91d16f7b-390a-4d9d-99d6-cc8e210801d1" containerID="36bf2499ceb16a6789bfaea260bc661782023dffc5c354b07ad277186683d4ac" exitCode=0
Feb 24 02:23:55.619820 master-0 kubenswrapper[31411]: I0224 02:23:55.619631 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerDied","Data":"36bf2499ceb16a6789bfaea260bc661782023dffc5c354b07ad277186683d4ac"}
Feb 24 02:23:55.619820 master-0 kubenswrapper[31411]: I0224 02:23:55.619732 31411 scope.go:117] "RemoveContainer" containerID="2ba78e893a4a0218430d836aa7034890de4059e422e2aea2e06c126f90cc740a"
Feb 24 02:23:55.620838 master-0 kubenswrapper[31411]: I0224 02:23:55.620782 31411 scope.go:117] "RemoveContainer" containerID="36bf2499ceb16a6789bfaea260bc661782023dffc5c354b07ad277186683d4ac"
Feb 24 02:23:56.632867 master-0 kubenswrapper[31411]: I0224 02:23:56.632788 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p" event={"ID":"91d16f7b-390a-4d9d-99d6-cc8e210801d1","Type":"ContainerStarted","Data":"b7b92d7ce9ac4a7d07e5a8ad349bf4f23ed3793e48510bd38d6535e38e507b08"}
Feb 24 02:23:56.633772 master-0 kubenswrapper[31411]: I0224 02:23:56.633464 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:23:56.635611 master-0 kubenswrapper[31411]: I0224 02:23:56.635537 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-4qf9p"
Feb 24 02:23:58.119586 master-0 kubenswrapper[31411]: I0224 02:23:58.119499 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-95876988f-c58ls" podUID="226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" containerName="oauth-openshift" containerID="cri-o://78c1739440f378ea9547e3b56777f5292327e7321779d6ef20c9e2cab0a21f2f" gracePeriod=15
Feb 24 02:23:58.672685 master-0 kubenswrapper[31411]: I0224 02:23:58.670916 31411 generic.go:334] "Generic (PLEG): container finished" podID="226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" containerID="78c1739440f378ea9547e3b56777f5292327e7321779d6ef20c9e2cab0a21f2f" exitCode=0
Feb 24 02:23:58.672685 master-0 kubenswrapper[31411]: I0224 02:23:58.671621 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-95876988f-c58ls" event={"ID":"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff","Type":"ContainerDied","Data":"78c1739440f378ea9547e3b56777f5292327e7321779d6ef20c9e2cab0a21f2f"}
Feb 24 02:23:58.756776 master-0 kubenswrapper[31411]: I0224 02:23:58.756720 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-95876988f-c58ls"
Feb 24 02:23:58.806058 master-0 kubenswrapper[31411]: I0224 02:23:58.805976 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"]
Feb 24 02:23:58.806722 master-0 kubenswrapper[31411]: E0224 02:23:58.806466 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 24 02:23:58.806722 master-0 kubenswrapper[31411]: I0224 02:23:58.806497 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 24 02:23:58.806722 master-0 kubenswrapper[31411]: E0224 02:23:58.806549 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" containerName="oauth-openshift"
Feb 24 02:23:58.806722 master-0 kubenswrapper[31411]: I0224 02:23:58.806564 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" containerName="oauth-openshift"
Feb 24 02:23:58.806995 master-0 kubenswrapper[31411]: I0224 02:23:58.806857 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" containerName="oauth-openshift"
Feb 24 02:23:58.806995 master-0 kubenswrapper[31411]: I0224 02:23:58.806885 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 24 02:23:58.807636 master-0 kubenswrapper[31411]: I0224 02:23:58.807570 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:58.829356 master-0 kubenswrapper[31411]: I0224 02:23:58.829043 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-login\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") "
Feb 24 02:23:58.829356 master-0 kubenswrapper[31411]: I0224 02:23:58.829106 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-session\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") "
Feb 24 02:23:58.829356 master-0 kubenswrapper[31411]: I0224 02:23:58.829239 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-serving-cert\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") "
Feb 24 02:23:58.829356 master-0 kubenswrapper[31411]: I0224 02:23:58.829280 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-trusted-ca-bundle\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") "
Feb 24 02:23:58.829356 master-0 kubenswrapper[31411]: I0224 02:23:58.829331 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-dir\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\"
(UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " Feb 24 02:23:58.829356 master-0 kubenswrapper[31411]: I0224 02:23:58.829358 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-provider-selection\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " Feb 24 02:23:58.829724 master-0 kubenswrapper[31411]: I0224 02:23:58.829398 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h6lb\" (UniqueName: \"kubernetes.io/projected/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-kube-api-access-6h6lb\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " Feb 24 02:23:58.829724 master-0 kubenswrapper[31411]: I0224 02:23:58.829449 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-router-certs\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " Feb 24 02:23:58.829724 master-0 kubenswrapper[31411]: I0224 02:23:58.829484 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " Feb 24 02:23:58.829724 master-0 kubenswrapper[31411]: I0224 02:23:58.829543 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-service-ca\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" 
(UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " Feb 24 02:23:58.829724 master-0 kubenswrapper[31411]: I0224 02:23:58.829568 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-ocp-branding-template\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " Feb 24 02:23:58.829724 master-0 kubenswrapper[31411]: I0224 02:23:58.829681 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-error\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " Feb 24 02:23:58.829724 master-0 kubenswrapper[31411]: I0224 02:23:58.829707 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-policies\") pod \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\" (UID: \"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff\") " Feb 24 02:23:58.833330 master-0 kubenswrapper[31411]: I0224 02:23:58.831793 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:23:58.833330 master-0 kubenswrapper[31411]: I0224 02:23:58.832028 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:23:58.833330 master-0 kubenswrapper[31411]: I0224 02:23:58.832274 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:23:58.833330 master-0 kubenswrapper[31411]: I0224 02:23:58.832353 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"] Feb 24 02:23:58.833330 master-0 kubenswrapper[31411]: I0224 02:23:58.832919 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:23:58.833330 master-0 kubenswrapper[31411]: I0224 02:23:58.833154 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:23:58.835957 master-0 kubenswrapper[31411]: I0224 02:23:58.835918 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:23:58.836286 master-0 kubenswrapper[31411]: I0224 02:23:58.836121 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:23:58.839370 master-0 kubenswrapper[31411]: I0224 02:23:58.839266 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-kube-api-access-6h6lb" (OuterVolumeSpecName: "kube-api-access-6h6lb") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "kube-api-access-6h6lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:23:58.839974 master-0 kubenswrapper[31411]: I0224 02:23:58.839500 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:23:58.840662 master-0 kubenswrapper[31411]: I0224 02:23:58.840504 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:23:58.842176 master-0 kubenswrapper[31411]: I0224 02:23:58.842063 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:23:58.842758 master-0 kubenswrapper[31411]: I0224 02:23:58.842712 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:23:58.846842 master-0 kubenswrapper[31411]: I0224 02:23:58.846779 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" (UID: "226b24ba-d12c-4453-a6c9-0d2f7f50c4ff"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:23:58.933048 master-0 kubenswrapper[31411]: I0224 02:23:58.932621 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933048 master-0 kubenswrapper[31411]: I0224 02:23:58.932715 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933048 master-0 kubenswrapper[31411]: I0224 02:23:58.932803 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933048 master-0 kubenswrapper[31411]: I0224 02:23:58.933034 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5pjz\" (UniqueName: \"kubernetes.io/projected/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-kube-api-access-w5pjz\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933751 master-0 
kubenswrapper[31411]: I0224 02:23:58.933071 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933751 master-0 kubenswrapper[31411]: I0224 02:23:58.933096 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-user-template-error\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933751 master-0 kubenswrapper[31411]: I0224 02:23:58.933135 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-audit-dir\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933751 master-0 kubenswrapper[31411]: I0224 02:23:58.933169 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-user-template-login\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933751 master-0 kubenswrapper[31411]: I0224 02:23:58.933194 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-session\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933751 master-0 kubenswrapper[31411]: I0224 02:23:58.933220 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933751 master-0 kubenswrapper[31411]: I0224 02:23:58.933244 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933751 master-0 kubenswrapper[31411]: I0224 02:23:58.933273 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-audit-policies\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.933751 master-0 kubenswrapper[31411]: I0224 02:23:58.933376 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933802 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933851 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933870 31411 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933893 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933909 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h6lb\" (UniqueName: \"kubernetes.io/projected/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-kube-api-access-6h6lb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933924 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933940 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933957 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933971 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.933989 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.934002 31411 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.934018 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-user-template-login\") 
on node \"master-0\" DevicePath \"\"" Feb 24 02:23:58.934293 master-0 kubenswrapper[31411]: I0224 02:23:58.934037 31411 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.035920 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-user-template-login\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.038188 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-session\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.038289 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.038374 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.038431 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-audit-policies\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.038486 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.038639 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.038930 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " 
pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.038980 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.039049 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5pjz\" (UniqueName: \"kubernetes.io/projected/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-kube-api-access-w5pjz\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.039094 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.039305 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-user-template-error\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.040025 31411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-audit-dir\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.041039 master-0 kubenswrapper[31411]: I0224 02:23:59.040168 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-audit-dir\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.042088 master-0 kubenswrapper[31411]: I0224 02:23:59.041925 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.044354 master-0 kubenswrapper[31411]: I0224 02:23:59.044289 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.044354 master-0 kubenswrapper[31411]: I0224 02:23:59.044317 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-audit-policies\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.045389 master-0 kubenswrapper[31411]: I0224 02:23:59.045322 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.050284 master-0 kubenswrapper[31411]: I0224 02:23:59.046861 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-user-template-login\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.050284 master-0 kubenswrapper[31411]: I0224 02:23:59.047201 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.050284 master-0 kubenswrapper[31411]: I0224 02:23:59.047243 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.050284 master-0 kubenswrapper[31411]: I0224 02:23:59.049329 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-session\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.051177 master-0 kubenswrapper[31411]: I0224 02:23:59.051090 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.052312 master-0 kubenswrapper[31411]: I0224 02:23:59.051784 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.058382 master-0 kubenswrapper[31411]: I0224 02:23:59.058343 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-v4-0-config-user-template-error\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.068351 master-0 kubenswrapper[31411]: I0224 02:23:59.068300 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5pjz\" (UniqueName: \"kubernetes.io/projected/3a070e67-63c1-4d58-8c68-1a2aa5a1702a-kube-api-access-w5pjz\") pod \"oauth-openshift-7f7cbb95f8-pfw2n\" (UID: \"3a070e67-63c1-4d58-8c68-1a2aa5a1702a\") " pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.130078 master-0 kubenswrapper[31411]: I0224 02:23:59.129995 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:23:59.665205 master-0 kubenswrapper[31411]: I0224 02:23:59.665147 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"]
Feb 24 02:23:59.683310 master-0 kubenswrapper[31411]: I0224 02:23:59.683227 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-95876988f-c58ls" event={"ID":"226b24ba-d12c-4453-a6c9-0d2f7f50c4ff","Type":"ContainerDied","Data":"1904f3d3180cba6d9a314ac02ca7481de11d374d89cc516057962012adc117a5"}
Feb 24 02:23:59.683310 master-0 kubenswrapper[31411]: I0224 02:23:59.683298 31411 scope.go:117] "RemoveContainer" containerID="78c1739440f378ea9547e3b56777f5292327e7321779d6ef20c9e2cab0a21f2f"
Feb 24 02:23:59.684428 master-0 kubenswrapper[31411]: I0224 02:23:59.683436 31411 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-95876988f-c58ls"
Feb 24 02:23:59.686084 master-0 kubenswrapper[31411]: I0224 02:23:59.686001 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" event={"ID":"3a070e67-63c1-4d58-8c68-1a2aa5a1702a","Type":"ContainerStarted","Data":"c4136eea0f309c4c8787d34843084a11e71006a14984f03065ef7fbf320184a0"}
Feb 24 02:23:59.726476 master-0 kubenswrapper[31411]: I0224 02:23:59.726395 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-95876988f-c58ls"]
Feb 24 02:23:59.747846 master-0 kubenswrapper[31411]: I0224 02:23:59.747778 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-95876988f-c58ls"]
Feb 24 02:24:00.703471 master-0 kubenswrapper[31411]: I0224 02:24:00.703380 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" event={"ID":"3a070e67-63c1-4d58-8c68-1a2aa5a1702a","Type":"ContainerStarted","Data":"dbd75ab495bf303fada61d00d87bdf5f98411f1214c55b7d1adb97040647458e"}
Feb 24 02:24:00.704130 master-0 kubenswrapper[31411]: I0224 02:24:00.703768 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:24:00.711259 master-0 kubenswrapper[31411]: I0224 02:24:00.711201 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n"
Feb 24 02:24:00.744708 master-0 kubenswrapper[31411]: I0224 02:24:00.744598 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f7cbb95f8-pfw2n" podStartSLOduration=46.744539266 podStartE2EDuration="46.744539266s" podCreationTimestamp="2026-02-24 02:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:24:00.733154739 +0000 UTC m=+183.950352575" watchObservedRunningTime="2026-02-24 02:24:00.744539266 +0000 UTC m=+183.961737142"
Feb 24 02:24:01.107775 master-0 kubenswrapper[31411]: I0224 02:24:01.107268 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226b24ba-d12c-4453-a6c9-0d2f7f50c4ff" path="/var/lib/kubelet/pods/226b24ba-d12c-4453-a6c9-0d2f7f50c4ff/volumes"
Feb 24 02:24:03.463703 master-0 kubenswrapper[31411]: I0224 02:24:03.463631 31411 patch_prober.go:28] interesting pod/console-576f8c76bf-2xx46 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Feb 24 02:24:03.464859 master-0 kubenswrapper[31411]: I0224 02:24:03.463736 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-576f8c76bf-2xx46" podUID="92850e60-7126-43e8-a7f7-fe411b6fc2b7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Feb 24 02:24:13.470042 master-0 kubenswrapper[31411]: I0224 02:24:13.469919 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-576f8c76bf-2xx46"
Feb 24 02:24:13.478281 master-0 kubenswrapper[31411]: I0224 02:24:13.478199 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-576f8c76bf-2xx46"
Feb 24 02:24:13.823926 master-0 kubenswrapper[31411]: I0224 02:24:13.823678 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-x847l" event={"ID":"2fbb8ae4-fc8b-46ff-a295-10a1207dd571","Type":"ContainerStarted","Data":"096fac0b43fc30a04c08a590ae6c0b822092c4ee8b14294a0fd7c028bfb7ce9f"}
Feb 24 02:24:13.854667 master-0 kubenswrapper[31411]: I0224 02:24:13.854530 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-955b69498-x847l" podStartSLOduration=26.739427545 podStartE2EDuration="59.854499659s" podCreationTimestamp="2026-02-24 02:23:14 +0000 UTC" firstStartedPulling="2026-02-24 02:23:39.524972574 +0000 UTC m=+162.742170450" lastFinishedPulling="2026-02-24 02:24:12.640044688 +0000 UTC m=+195.857242564" observedRunningTime="2026-02-24 02:24:13.852263757 +0000 UTC m=+197.069461643" watchObservedRunningTime="2026-02-24 02:24:13.854499659 +0000 UTC m=+197.071697565"
Feb 24 02:24:14.833440 master-0 kubenswrapper[31411]: I0224 02:24:14.833325 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-955b69498-x847l"
Feb 24 02:24:14.835929 master-0 kubenswrapper[31411]: I0224 02:24:14.835864 31411 patch_prober.go:28] interesting pod/downloads-955b69498-x847l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" start-of-body=
Feb 24 02:24:14.836059 master-0 kubenswrapper[31411]: I0224 02:24:14.835952 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-x847l" podUID="2fbb8ae4-fc8b-46ff-a295-10a1207dd571" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused"
Feb 24 02:24:15.843347 master-0 kubenswrapper[31411]: I0224 02:24:15.843213 31411 patch_prober.go:28] interesting pod/downloads-955b69498-x847l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused" start-of-body=
Feb 24 02:24:15.844257 master-0 kubenswrapper[31411]: I0224 02:24:15.843361 31411 prober.go:107] "Probe failed" probeType="Readiness"
pod="openshift-console/downloads-955b69498-x847l" podUID="2fbb8ae4-fc8b-46ff-a295-10a1207dd571" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.94:8080/\": dial tcp 10.128.0.94:8080: connect: connection refused"
Feb 24 02:24:23.439766 master-0 kubenswrapper[31411]: I0224 02:24:23.439655 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-955b69498-x847l"
Feb 24 02:24:37.621541 master-0 kubenswrapper[31411]: I0224 02:24:37.621457 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xrqvm"]
Feb 24 02:24:37.623819 master-0 kubenswrapper[31411]: I0224 02:24:37.623707 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.626538 master-0 kubenswrapper[31411]: I0224 02:24:37.626476 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 24 02:24:37.629686 master-0 kubenswrapper[31411]: I0224 02:24:37.629555 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-s9rhm"
Feb 24 02:24:37.676451 master-0 kubenswrapper[31411]: I0224 02:24:37.676328 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a-host\") pod \"node-ca-xrqvm\" (UID: \"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a\") " pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.676827 master-0 kubenswrapper[31411]: I0224 02:24:37.676463 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbs7v\" (UniqueName: \"kubernetes.io/projected/9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a-kube-api-access-wbs7v\") pod \"node-ca-xrqvm\" (UID: \"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a\") " pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.676827 master-0 kubenswrapper[31411]: I0224 02:24:37.676655 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a-serviceca\") pod \"node-ca-xrqvm\" (UID: \"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a\") " pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.778313 master-0 kubenswrapper[31411]: I0224 02:24:37.778200 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbs7v\" (UniqueName: \"kubernetes.io/projected/9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a-kube-api-access-wbs7v\") pod \"node-ca-xrqvm\" (UID: \"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a\") " pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.778313 master-0 kubenswrapper[31411]: I0224 02:24:37.778309 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a-serviceca\") pod \"node-ca-xrqvm\" (UID: \"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a\") " pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.778732 master-0 kubenswrapper[31411]: I0224 02:24:37.778631 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a-host\") pod \"node-ca-xrqvm\" (UID: \"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a\") " pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.778874 master-0 kubenswrapper[31411]: I0224 02:24:37.778788 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a-host\") pod \"node-ca-xrqvm\" (UID: \"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a\") " pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.779686 master-0 kubenswrapper[31411]: I0224 02:24:37.779622 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a-serviceca\") pod \"node-ca-xrqvm\" (UID: \"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a\") " pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.796660 master-0 kubenswrapper[31411]: I0224 02:24:37.796595 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbs7v\" (UniqueName: \"kubernetes.io/projected/9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a-kube-api-access-wbs7v\") pod \"node-ca-xrqvm\" (UID: \"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a\") " pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:37.960465 master-0 kubenswrapper[31411]: I0224 02:24:37.960372 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xrqvm"
Feb 24 02:24:38.000500 master-0 kubenswrapper[31411]: W0224 02:24:38.000423 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a7e8d1d_3fc9_4111_9a9f_a1c939b1978a.slice/crio-7a3c22d8d30bbf6df765dfa74e567cdd10f72e9363faff68ab435333625eaede WatchSource:0}: Error finding container 7a3c22d8d30bbf6df765dfa74e567cdd10f72e9363faff68ab435333625eaede: Status 404 returned error can't find the container with id 7a3c22d8d30bbf6df765dfa74e567cdd10f72e9363faff68ab435333625eaede
Feb 24 02:24:38.070487 master-0 kubenswrapper[31411]: I0224 02:24:38.070406 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xrqvm" event={"ID":"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a","Type":"ContainerStarted","Data":"7a3c22d8d30bbf6df765dfa74e567cdd10f72e9363faff68ab435333625eaede"}
Feb 24 02:24:41.137820 master-0 kubenswrapper[31411]: I0224 02:24:41.136897 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xrqvm" event={"ID":"9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a","Type":"ContainerStarted","Data":"c93c8cee58653eea39e5576330bdfb8cd3a3d3258a7c2f44ca894cccce921ebb"}
Feb 24 02:24:41.170043 master-0 kubenswrapper[31411]: I0224 02:24:41.169949 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xrqvm" podStartSLOduration=1.8991387309999999 podStartE2EDuration="4.169921478s" podCreationTimestamp="2026-02-24 02:24:37 +0000 UTC" firstStartedPulling="2026-02-24 02:24:38.00248798 +0000 UTC m=+221.219685866" lastFinishedPulling="2026-02-24 02:24:40.273270757 +0000 UTC m=+223.490468613" observedRunningTime="2026-02-24 02:24:41.160983239 +0000 UTC m=+224.378181125" watchObservedRunningTime="2026-02-24 02:24:41.169921478 +0000 UTC m=+224.387119354"
Feb 24 02:25:25.306562 master-0 kubenswrapper[31411]: I0224 02:25:25.303193 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 24 02:25:25.306562 master-0 kubenswrapper[31411]: I0224 02:25:25.306483 31411 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.314876 master-0 kubenswrapper[31411]: I0224 02:25:25.314801 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 24 02:25:25.318621 master-0 kubenswrapper[31411]: I0224 02:25:25.315004 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 24 02:25:25.318621 master-0 kubenswrapper[31411]: I0224 02:25:25.315036 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 24 02:25:25.318621 master-0 kubenswrapper[31411]: I0224 02:25:25.315136 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 24 02:25:25.318621 master-0 kubenswrapper[31411]: I0224 02:25:25.315041 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 24 02:25:25.318621 master-0 kubenswrapper[31411]: I0224 02:25:25.315423 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 24 02:25:25.318621 master-0 kubenswrapper[31411]: I0224 02:25:25.316092 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 24 02:25:25.318621 master-0 kubenswrapper[31411]: I0224 02:25:25.316238 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 24 02:25:25.329980 master-0 kubenswrapper[31411]: I0224 02:25:25.319777 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 24 02:25:25.394840 master-0 kubenswrapper[31411]: I0224 02:25:25.394773 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgxx\" (UniqueName: \"kubernetes.io/projected/977f33ee-175a-43f4-8b50-f539e1c2c583-kube-api-access-9qgxx\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395101 master-0 kubenswrapper[31411]: I0224 02:25:25.394858 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-config-volume\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395101 master-0 kubenswrapper[31411]: I0224 02:25:25.394982 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395176 master-0 kubenswrapper[31411]: I0224 02:25:25.395112 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/977f33ee-175a-43f4-8b50-f539e1c2c583-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395176 master-0 kubenswrapper[31411]: I0224 02:25:25.395163 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/977f33ee-175a-43f4-8b50-f539e1c2c583-tls-assets\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395242 master-0 kubenswrapper[31411]: I0224 02:25:25.395189 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-web-config\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395242 master-0 kubenswrapper[31411]: I0224 02:25:25.395234 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395319 master-0 kubenswrapper[31411]: I0224 02:25:25.395306 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/977f33ee-175a-43f4-8b50-f539e1c2c583-config-out\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395456 master-0 kubenswrapper[31411]: I0224 02:25:25.395374 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/977f33ee-175a-43f4-8b50-f539e1c2c583-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395456 master-0 kubenswrapper[31411]: I0224 02:25:25.395439 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395534 master-0 kubenswrapper[31411]: I0224 02:25:25.395515 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.395727 master-0 kubenswrapper[31411]: I0224 02:25:25.395651 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/977f33ee-175a-43f4-8b50-f539e1c2c583-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.497481 master-0 kubenswrapper[31411]: I0224 02:25:25.497382 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/977f33ee-175a-43f4-8b50-f539e1c2c583-config-out\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.497859 master-0 kubenswrapper[31411]: I0224 02:25:25.497499 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/977f33ee-175a-43f4-8b50-f539e1c2c583-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.497859 master-0 kubenswrapper[31411]: I0224 02:25:25.497564 31411
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.497859 master-0 kubenswrapper[31411]: I0224 02:25:25.497657 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.497859 master-0 kubenswrapper[31411]: I0224 02:25:25.497752 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/977f33ee-175a-43f4-8b50-f539e1c2c583-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.498428 master-0 kubenswrapper[31411]: I0224 02:25:25.498325 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgxx\" (UniqueName: \"kubernetes.io/projected/977f33ee-175a-43f4-8b50-f539e1c2c583-kube-api-access-9qgxx\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.498715 master-0 kubenswrapper[31411]: I0224 02:25:25.498634 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-config-volume\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.498826 master-0 kubenswrapper[31411]: I0224 02:25:25.498794 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.498940 master-0 kubenswrapper[31411]: I0224 02:25:25.498904 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/977f33ee-175a-43f4-8b50-f539e1c2c583-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.499023 master-0 kubenswrapper[31411]: I0224 02:25:25.498982 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/977f33ee-175a-43f4-8b50-f539e1c2c583-tls-assets\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.499102 master-0 kubenswrapper[31411]: I0224 02:25:25.498660 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/977f33ee-175a-43f4-8b50-f539e1c2c583-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.499102 master-0 kubenswrapper[31411]: I0224 02:25:25.499039 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-web-config\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.500186 master-0 kubenswrapper[31411]: I0224 02:25:25.500090 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.501040 master-0 kubenswrapper[31411]: I0224 02:25:25.500980 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/977f33ee-175a-43f4-8b50-f539e1c2c583-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.502601 master-0 kubenswrapper[31411]: I0224 02:25:25.502523 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/977f33ee-175a-43f4-8b50-f539e1c2c583-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.504153 master-0 kubenswrapper[31411]: I0224 02:25:25.504091 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/977f33ee-175a-43f4-8b50-f539e1c2c583-config-out\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.505895 master-0 kubenswrapper[31411]: I0224 02:25:25.505842 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.506352 master-0 kubenswrapper[31411]: I0224 02:25:25.506304 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-web-config\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.506715 master-0 kubenswrapper[31411]: I0224 02:25:25.506656 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-config-volume\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.507018 master-0 kubenswrapper[31411]: I0224 02:25:25.506942 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.507125 master-0 kubenswrapper[31411]: I0224 02:25:25.507042 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 02:25:25.508810 master-0 kubenswrapper[31411]: I0224 02:25:25.508712 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName:
\"kubernetes.io/secret/977f33ee-175a-43f4-8b50-f539e1c2c583-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 02:25:25.509051 master-0 kubenswrapper[31411]: I0224 02:25:25.508980 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/977f33ee-175a-43f4-8b50-f539e1c2c583-tls-assets\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 02:25:25.522192 master-0 kubenswrapper[31411]: I0224 02:25:25.522101 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgxx\" (UniqueName: \"kubernetes.io/projected/977f33ee-175a-43f4-8b50-f539e1c2c583-kube-api-access-9qgxx\") pod \"alertmanager-main-0\" (UID: \"977f33ee-175a-43f4-8b50-f539e1c2c583\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 02:25:25.639543 master-0 kubenswrapper[31411]: I0224 02:25:25.639339 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 24 02:25:26.184609 master-0 kubenswrapper[31411]: I0224 02:25:26.184509 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 24 02:25:26.196605 master-0 kubenswrapper[31411]: W0224 02:25:26.194363 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod977f33ee_175a_43f4_8b50_f539e1c2c583.slice/crio-aabbc77d6fd766a1ad721df55a582bd0960316920aa52fd138b7bfe9ced5ff5b WatchSource:0}: Error finding container aabbc77d6fd766a1ad721df55a582bd0960316920aa52fd138b7bfe9ced5ff5b: Status 404 returned error can't find the container with id aabbc77d6fd766a1ad721df55a582bd0960316920aa52fd138b7bfe9ced5ff5b Feb 24 02:25:26.280122 master-0 kubenswrapper[31411]: I0224 02:25:26.280056 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-69565684c5-snfqm"] Feb 24 02:25:26.284232 master-0 kubenswrapper[31411]: I0224 02:25:26.284196 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.287702 master-0 kubenswrapper[31411]: I0224 02:25:26.287656 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 24 02:25:26.287859 master-0 kubenswrapper[31411]: I0224 02:25:26.287699 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 24 02:25:26.287992 master-0 kubenswrapper[31411]: I0224 02:25:26.287953 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4run762hnmqqc" Feb 24 02:25:26.288125 master-0 kubenswrapper[31411]: I0224 02:25:26.288080 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 24 02:25:26.289687 master-0 kubenswrapper[31411]: I0224 02:25:26.289640 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 24 02:25:26.292676 master-0 kubenswrapper[31411]: I0224 02:25:26.292603 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 24 02:25:26.307996 master-0 kubenswrapper[31411]: I0224 02:25:26.307917 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-69565684c5-snfqm"] Feb 24 02:25:26.442717 master-0 kubenswrapper[31411]: I0224 02:25:26.442535 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-tls\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.442717 master-0 
kubenswrapper[31411]: I0224 02:25:26.442614 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.442717 master-0 kubenswrapper[31411]: I0224 02:25:26.442660 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-grpc-tls\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.442717 master-0 kubenswrapper[31411]: I0224 02:25:26.442708 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.443209 master-0 kubenswrapper[31411]: I0224 02:25:26.442740 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.443209 master-0 kubenswrapper[31411]: I0224 02:25:26.442769 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/133397d5-a069-4b31-b4d8-a7442bc62eba-metrics-client-ca\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.443209 master-0 kubenswrapper[31411]: I0224 02:25:26.442807 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpw6k\" (UniqueName: \"kubernetes.io/projected/133397d5-a069-4b31-b4d8-a7442bc62eba-kube-api-access-qpw6k\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.443209 master-0 kubenswrapper[31411]: I0224 02:25:26.442840 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.544474 master-0 kubenswrapper[31411]: I0224 02:25:26.544407 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.544981 master-0 kubenswrapper[31411]: I0224 02:25:26.544950 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.545183 master-0 kubenswrapper[31411]: I0224 02:25:26.545155 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/133397d5-a069-4b31-b4d8-a7442bc62eba-metrics-client-ca\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.545359 master-0 kubenswrapper[31411]: I0224 02:25:26.545334 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpw6k\" (UniqueName: \"kubernetes.io/projected/133397d5-a069-4b31-b4d8-a7442bc62eba-kube-api-access-qpw6k\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.545784 master-0 kubenswrapper[31411]: I0224 02:25:26.545755 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.546003 master-0 kubenswrapper[31411]: I0224 02:25:26.545976 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-tls\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " 
pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.546197 master-0 kubenswrapper[31411]: I0224 02:25:26.546168 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.546371 master-0 kubenswrapper[31411]: I0224 02:25:26.546323 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/133397d5-a069-4b31-b4d8-a7442bc62eba-metrics-client-ca\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.546534 master-0 kubenswrapper[31411]: I0224 02:25:26.546505 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-grpc-tls\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.549141 master-0 kubenswrapper[31411]: I0224 02:25:26.549080 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.551297 master-0 kubenswrapper[31411]: I0224 02:25:26.551113 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.552824 master-0 kubenswrapper[31411]: I0224 02:25:26.552136 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.553248 master-0 kubenswrapper[31411]: I0224 02:25:26.553180 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-tls\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.553361 master-0 kubenswrapper[31411]: I0224 02:25:26.553326 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.553569 master-0 kubenswrapper[31411]: I0224 02:25:26.553508 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/133397d5-a069-4b31-b4d8-a7442bc62eba-secret-grpc-tls\") pod \"thanos-querier-69565684c5-snfqm\" (UID: 
\"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.567535 master-0 kubenswrapper[31411]: I0224 02:25:26.566624 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpw6k\" (UniqueName: \"kubernetes.io/projected/133397d5-a069-4b31-b4d8-a7442bc62eba-kube-api-access-qpw6k\") pod \"thanos-querier-69565684c5-snfqm\" (UID: \"133397d5-a069-4b31-b4d8-a7442bc62eba\") " pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:26.595553 master-0 kubenswrapper[31411]: I0224 02:25:26.595461 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"977f33ee-175a-43f4-8b50-f539e1c2c583","Type":"ContainerStarted","Data":"aabbc77d6fd766a1ad721df55a582bd0960316920aa52fd138b7bfe9ced5ff5b"} Feb 24 02:25:26.664821 master-0 kubenswrapper[31411]: I0224 02:25:26.664640 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:27.183790 master-0 kubenswrapper[31411]: I0224 02:25:27.183534 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-69565684c5-snfqm"] Feb 24 02:25:27.607414 master-0 kubenswrapper[31411]: I0224 02:25:27.607331 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" event={"ID":"133397d5-a069-4b31-b4d8-a7442bc62eba","Type":"ContainerStarted","Data":"0c0e405c5c3c22912bdccf8437b1672106b4a34759d90d89ec214dd6eb901caa"} Feb 24 02:25:28.619242 master-0 kubenswrapper[31411]: I0224 02:25:28.619158 31411 generic.go:334] "Generic (PLEG): container finished" podID="977f33ee-175a-43f4-8b50-f539e1c2c583" containerID="9e4e0210bdfdee59475965575061fbff443357f9797b7bbf2d6434857d0e88f4" exitCode=0 Feb 24 02:25:28.619242 master-0 kubenswrapper[31411]: I0224 02:25:28.619221 31411 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"977f33ee-175a-43f4-8b50-f539e1c2c583","Type":"ContainerDied","Data":"9e4e0210bdfdee59475965575061fbff443357f9797b7bbf2d6434857d0e88f4"} Feb 24 02:25:29.049756 master-0 kubenswrapper[31411]: I0224 02:25:29.040383 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-67ddc7b799-zlnvf"] Feb 24 02:25:29.049756 master-0 kubenswrapper[31411]: I0224 02:25:29.045911 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.051184 master-0 kubenswrapper[31411]: I0224 02:25:29.050307 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-56t3bo1jupebb" Feb 24 02:25:29.065864 master-0 kubenswrapper[31411]: I0224 02:25:29.065742 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-67ddc7b799-zlnvf"] Feb 24 02:25:29.079650 master-0 kubenswrapper[31411]: I0224 02:25:29.079140 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-7b9cc5984b-smpdl"] Feb 24 02:25:29.079650 master-0 kubenswrapper[31411]: I0224 02:25:29.079454 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" podUID="8c396c41-c617-4631-9700-a7052af5a276" containerName="metrics-server" containerID="cri-o://6e72c582c52cc1175706db2ef4a54c95fdecae69c4b7d4caf28fde6f98e8eaa4" gracePeriod=170 Feb 24 02:25:29.232940 master-0 kubenswrapper[31411]: I0224 02:25:29.232855 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-secret-metrics-client-certs\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " 
pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.233723 master-0 kubenswrapper[31411]: I0224 02:25:29.232952 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-secret-metrics-server-tls\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.233723 master-0 kubenswrapper[31411]: I0224 02:25:29.233093 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wftwb\" (UniqueName: \"kubernetes.io/projected/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-kube-api-access-wftwb\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.233723 master-0 kubenswrapper[31411]: I0224 02:25:29.233156 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-audit-log\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.233723 master-0 kubenswrapper[31411]: I0224 02:25:29.233351 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.233723 master-0 kubenswrapper[31411]: I0224 02:25:29.233569 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-client-ca-bundle\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.233723 master-0 kubenswrapper[31411]: I0224 02:25:29.233686 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-metrics-server-audit-profiles\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.279527 master-0 kubenswrapper[31411]: I0224 02:25:29.279450 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g"] Feb 24 02:25:29.282356 master-0 kubenswrapper[31411]: I0224 02:25:29.282315 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.285048 master-0 kubenswrapper[31411]: I0224 02:25:29.284969 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 24 02:25:29.285373 master-0 kubenswrapper[31411]: I0224 02:25:29.285334 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 24 02:25:29.286093 master-0 kubenswrapper[31411]: I0224 02:25:29.285655 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 24 02:25:29.286093 master-0 kubenswrapper[31411]: I0224 02:25:29.285858 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 24 02:25:29.288668 master-0 kubenswrapper[31411]: I0224 02:25:29.288611 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 24 02:25:29.303414 master-0 kubenswrapper[31411]: I0224 02:25:29.303318 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 24 02:25:29.309741 master-0 kubenswrapper[31411]: I0224 02:25:29.309687 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g"] Feb 24 02:25:29.338133 master-0 kubenswrapper[31411]: I0224 02:25:29.336504 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.338133 master-0 kubenswrapper[31411]: 
I0224 02:25:29.336699 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-client-ca-bundle\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.338133 master-0 kubenswrapper[31411]: I0224 02:25:29.336776 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-metrics-server-audit-profiles\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.338133 master-0 kubenswrapper[31411]: I0224 02:25:29.336841 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-secret-metrics-client-certs\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.338133 master-0 kubenswrapper[31411]: I0224 02:25:29.336879 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-secret-metrics-server-tls\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.338133 master-0 kubenswrapper[31411]: I0224 02:25:29.336990 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wftwb\" (UniqueName: \"kubernetes.io/projected/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-kube-api-access-wftwb\") pod 
\"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.338133 master-0 kubenswrapper[31411]: I0224 02:25:29.337038 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-audit-log\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.338133 master-0 kubenswrapper[31411]: I0224 02:25:29.337685 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.338133 master-0 kubenswrapper[31411]: I0224 02:25:29.337917 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-audit-log\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.338816 master-0 kubenswrapper[31411]: I0224 02:25:29.338712 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-metrics-server-audit-profiles\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.341758 master-0 kubenswrapper[31411]: I0224 02:25:29.341705 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-client-ca-bundle\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.342638 master-0 kubenswrapper[31411]: I0224 02:25:29.342594 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-secret-metrics-server-tls\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.350338 master-0 kubenswrapper[31411]: I0224 02:25:29.350298 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-secret-metrics-client-certs\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.364997 master-0 kubenswrapper[31411]: I0224 02:25:29.364959 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wftwb\" (UniqueName: \"kubernetes.io/projected/02fc214d-8c40-4ed5-9f18-8bf5863d8d70-kube-api-access-wftwb\") pod \"metrics-server-67ddc7b799-zlnvf\" (UID: \"02fc214d-8c40-4ed5-9f18-8bf5863d8d70\") " pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.374253 master-0 kubenswrapper[31411]: I0224 02:25:29.374162 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:29.439238 master-0 kubenswrapper[31411]: I0224 02:25:29.439185 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-secret-telemeter-client\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.439542 master-0 kubenswrapper[31411]: I0224 02:25:29.439471 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-serving-certs-ca-bundle\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.439682 master-0 kubenswrapper[31411]: I0224 02:25:29.439549 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.439682 master-0 kubenswrapper[31411]: I0224 02:25:29.439647 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-telemeter-client-tls\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.439830 master-0 
kubenswrapper[31411]: I0224 02:25:29.439801 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-federate-client-tls\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.439915 master-0 kubenswrapper[31411]: I0224 02:25:29.439863 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-metrics-client-ca\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.440120 master-0 kubenswrapper[31411]: I0224 02:25:29.440005 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.440120 master-0 kubenswrapper[31411]: I0224 02:25:29.440047 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzl95\" (UniqueName: \"kubernetes.io/projected/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-kube-api-access-gzl95\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.541696 master-0 kubenswrapper[31411]: I0224 02:25:29.541654 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: 
\"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-telemeter-client-tls\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.541793 master-0 kubenswrapper[31411]: I0224 02:25:29.541717 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-federate-client-tls\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.541793 master-0 kubenswrapper[31411]: I0224 02:25:29.541751 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-metrics-client-ca\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.541793 master-0 kubenswrapper[31411]: I0224 02:25:29.541786 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.542104 master-0 kubenswrapper[31411]: I0224 02:25:29.542044 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzl95\" (UniqueName: \"kubernetes.io/projected/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-kube-api-access-gzl95\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.542272 
master-0 kubenswrapper[31411]: I0224 02:25:29.542242 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-secret-telemeter-client\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.542416 master-0 kubenswrapper[31411]: I0224 02:25:29.542395 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-serving-certs-ca-bundle\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.542464 master-0 kubenswrapper[31411]: I0224 02:25:29.542421 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.544010 master-0 kubenswrapper[31411]: I0224 02:25:29.543979 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.544064 master-0 kubenswrapper[31411]: I0224 02:25:29.544038 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-serving-certs-ca-bundle\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.544224 master-0 kubenswrapper[31411]: I0224 02:25:29.544187 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-metrics-client-ca\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.547942 master-0 kubenswrapper[31411]: I0224 02:25:29.547888 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-telemeter-client-tls\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.548537 master-0 kubenswrapper[31411]: I0224 02:25:29.548505 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-secret-telemeter-client\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.548839 master-0 kubenswrapper[31411]: I0224 02:25:29.548801 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-federate-client-tls\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.551746 master-0 kubenswrapper[31411]: I0224 
02:25:29.551705 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.571854 master-0 kubenswrapper[31411]: I0224 02:25:29.571769 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzl95\" (UniqueName: \"kubernetes.io/projected/d57a5233-cd86-45d7-9f96-92eb0cc06b7d-kube-api-access-gzl95\") pod \"telemeter-client-cc55f5fb6-hcn4g\" (UID: \"d57a5233-cd86-45d7-9f96-92eb0cc06b7d\") " pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:29.625866 master-0 kubenswrapper[31411]: I0224 02:25:29.625820 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" Feb 24 02:25:30.326591 master-0 kubenswrapper[31411]: I0224 02:25:30.326527 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-67ddc7b799-zlnvf"] Feb 24 02:25:30.398161 master-0 kubenswrapper[31411]: I0224 02:25:30.398101 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g"] Feb 24 02:25:30.616396 master-0 kubenswrapper[31411]: I0224 02:25:30.616330 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 24 02:25:30.620427 master-0 kubenswrapper[31411]: I0224 02:25:30.620363 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.629636 master-0 kubenswrapper[31411]: I0224 02:25:30.626765 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 24 02:25:30.629636 master-0 kubenswrapper[31411]: I0224 02:25:30.627264 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 24 02:25:30.629636 master-0 kubenswrapper[31411]: I0224 02:25:30.627266 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 24 02:25:30.629636 master-0 kubenswrapper[31411]: I0224 02:25:30.627489 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 24 02:25:30.629636 master-0 kubenswrapper[31411]: I0224 02:25:30.628052 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 24 02:25:30.634500 master-0 kubenswrapper[31411]: I0224 02:25:30.634448 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 24 02:25:30.634837 master-0 kubenswrapper[31411]: I0224 02:25:30.634649 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 24 02:25:30.634837 master-0 kubenswrapper[31411]: I0224 02:25:30.634835 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-19m8gtk5v5gsq" Feb 24 02:25:30.635072 master-0 kubenswrapper[31411]: I0224 02:25:30.635040 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 24 02:25:30.635072 master-0 kubenswrapper[31411]: I0224 02:25:30.635036 31411 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 24 02:25:30.635533 master-0 kubenswrapper[31411]: I0224 02:25:30.635469 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 24 02:25:30.639598 master-0 kubenswrapper[31411]: I0224 02:25:30.637221 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 24 02:25:30.643091 master-0 kubenswrapper[31411]: I0224 02:25:30.643033 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" event={"ID":"d57a5233-cd86-45d7-9f96-92eb0cc06b7d","Type":"ContainerStarted","Data":"0a7ce3e0642f7765488d27e71cb7e1fa0c281bfac08ecf48eefa251a1cd0af1e"} Feb 24 02:25:30.647784 master-0 kubenswrapper[31411]: I0224 02:25:30.645436 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" event={"ID":"02fc214d-8c40-4ed5-9f18-8bf5863d8d70","Type":"ContainerStarted","Data":"3050b13e06f70d9ce89742994f8bfaa46fbea827f2d236e21816e016f0582cf2"} Feb 24 02:25:30.647784 master-0 kubenswrapper[31411]: I0224 02:25:30.645481 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" event={"ID":"02fc214d-8c40-4ed5-9f18-8bf5863d8d70","Type":"ContainerStarted","Data":"2f622e2b4353178266297ce550597fb38b25511f656c40f5a8074eb759a8d429"} Feb 24 02:25:30.648797 master-0 kubenswrapper[31411]: I0224 02:25:30.648564 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 24 02:25:30.648797 master-0 kubenswrapper[31411]: I0224 02:25:30.648677 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" 
event={"ID":"133397d5-a069-4b31-b4d8-a7442bc62eba","Type":"ContainerStarted","Data":"d3c4490a4a6f988a94ac8f5f1aa28c17935866b2d6775f695c4f51c893303007"} Feb 24 02:25:30.648797 master-0 kubenswrapper[31411]: I0224 02:25:30.648702 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" event={"ID":"133397d5-a069-4b31-b4d8-a7442bc62eba","Type":"ContainerStarted","Data":"0205bf1a038e0eac9918d252411a15f12e4925e06c10eebb72849eea19f84fbc"} Feb 24 02:25:30.648797 master-0 kubenswrapper[31411]: I0224 02:25:30.648714 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" event={"ID":"133397d5-a069-4b31-b4d8-a7442bc62eba","Type":"ContainerStarted","Data":"66f81a00dac6c64b3d1231892a76ffb99f2cca6e2ccaecd8cfa96d97d67b6b83"} Feb 24 02:25:30.690922 master-0 kubenswrapper[31411]: I0224 02:25:30.690831 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" podStartSLOduration=2.690809186 podStartE2EDuration="2.690809186s" podCreationTimestamp="2026-02-24 02:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:25:30.683646247 +0000 UTC m=+273.900844093" watchObservedRunningTime="2026-02-24 02:25:30.690809186 +0000 UTC m=+273.908007032" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774147 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774237 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774268 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7q54\" (UniqueName: \"kubernetes.io/projected/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-kube-api-access-d7q54\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774307 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774343 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-config-out\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774369 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" 
Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774396 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774439 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774464 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774487 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774511 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-config\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774536 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774623 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774664 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774726 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774748 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774768 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-web-config\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.776894 master-0 kubenswrapper[31411]: I0224 02:25:30.774788 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.877273 master-0 kubenswrapper[31411]: I0224 02:25:30.877164 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.877273 master-0 kubenswrapper[31411]: I0224 02:25:30.877252 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 
02:25:30.877728 master-0 kubenswrapper[31411]: I0224 02:25:30.877299 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7q54\" (UniqueName: \"kubernetes.io/projected/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-kube-api-access-d7q54\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.877728 master-0 kubenswrapper[31411]: I0224 02:25:30.877583 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.877976 master-0 kubenswrapper[31411]: I0224 02:25:30.877911 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-config-out\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.878076 master-0 kubenswrapper[31411]: I0224 02:25:30.878057 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.878691 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.878733 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.878846 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879232 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879275 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879321 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-config\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879368 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879405 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879499 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879590 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879629 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879665 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-web-config\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.879691 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.880368 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.881376 master-0 kubenswrapper[31411]: I0224 02:25:30.880856 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.882473 master-0 kubenswrapper[31411]: I0224 02:25:30.881482 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.882473 master-0 kubenswrapper[31411]: I0224 02:25:30.882319 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.884060 master-0 kubenswrapper[31411]: I0224 02:25:30.882982 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-config-out\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.884431 master-0 kubenswrapper[31411]: I0224 02:25:30.884323 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.885104 master-0 kubenswrapper[31411]: I0224 02:25:30.885003 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.888869 master-0 kubenswrapper[31411]: I0224 02:25:30.885239 31411 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.888869 master-0 kubenswrapper[31411]: I0224 02:25:30.885598 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.888869 master-0 kubenswrapper[31411]: I0224 02:25:30.888282 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-web-config\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.888869 master-0 kubenswrapper[31411]: I0224 02:25:30.888756 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-config\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.891100 master-0 kubenswrapper[31411]: I0224 02:25:30.890472 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.892183 master-0 kubenswrapper[31411]: I0224 02:25:30.891794 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.892183 master-0 kubenswrapper[31411]: I0224 02:25:30.892115 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.896226 master-0 kubenswrapper[31411]: I0224 02:25:30.894310 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.896226 master-0 kubenswrapper[31411]: I0224 02:25:30.896098 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7q54\" (UniqueName: \"kubernetes.io/projected/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-kube-api-access-d7q54\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.912337 master-0 kubenswrapper[31411]: I0224 02:25:30.912262 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:30.963315 master-0 kubenswrapper[31411]: I0224 02:25:30.963221 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:32.302393 master-0 kubenswrapper[31411]: I0224 02:25:32.302322 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 24 02:25:32.673010 master-0 kubenswrapper[31411]: I0224 02:25:32.672883 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"977f33ee-175a-43f4-8b50-f539e1c2c583","Type":"ContainerStarted","Data":"1e90148272b2aae1b1aad411bd94b17b01d0e75fb1ceb02012c5e3172cb75398"} Feb 24 02:25:32.677426 master-0 kubenswrapper[31411]: I0224 02:25:32.677386 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" event={"ID":"133397d5-a069-4b31-b4d8-a7442bc62eba","Type":"ContainerStarted","Data":"0e3e8fbc24aa7ced6853669e780b9d3f0f60af31c4308e295639c2072e4e9668"} Feb 24 02:25:32.706460 master-0 kubenswrapper[31411]: W0224 02:25:32.706328 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3f5dc53_a9d4_4d18_a3ed_9415a1849bf0.slice/crio-3ce60aed37692610416a502b3285f72c41b45fc7703e3e9baeaffe7b5c330429 WatchSource:0}: Error finding container 3ce60aed37692610416a502b3285f72c41b45fc7703e3e9baeaffe7b5c330429: Status 404 returned error can't find the container with id 3ce60aed37692610416a502b3285f72c41b45fc7703e3e9baeaffe7b5c330429 Feb 24 02:25:33.695229 master-0 kubenswrapper[31411]: I0224 02:25:33.695114 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" event={"ID":"133397d5-a069-4b31-b4d8-a7442bc62eba","Type":"ContainerStarted","Data":"696b61a9281111bb8c0cc02bdbd90903055b2fbcce7652fc49688eafe1a281d2"} Feb 24 02:25:33.696183 master-0 kubenswrapper[31411]: I0224 02:25:33.695247 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" event={"ID":"133397d5-a069-4b31-b4d8-a7442bc62eba","Type":"ContainerStarted","Data":"0ed937d530999bc75c1b5935442a3a5d96950d4d4ba066f7db798e6223348d39"} Feb 24 02:25:33.696183 master-0 kubenswrapper[31411]: I0224 02:25:33.695332 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:33.698217 master-0 kubenswrapper[31411]: I0224 02:25:33.698129 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" event={"ID":"d57a5233-cd86-45d7-9f96-92eb0cc06b7d","Type":"ContainerStarted","Data":"864c0ea893db7ecb82bddaa6edd67ced1ba3bdeefb3b83167c32bbe783141ba7"} Feb 24 02:25:33.698317 master-0 kubenswrapper[31411]: I0224 02:25:33.698227 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" event={"ID":"d57a5233-cd86-45d7-9f96-92eb0cc06b7d","Type":"ContainerStarted","Data":"b38894a015ae638a0e3a0e79034ecaab9555d70e892ebd2ae320898c91d77f45"} Feb 24 02:25:33.703116 master-0 kubenswrapper[31411]: I0224 02:25:33.703028 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"977f33ee-175a-43f4-8b50-f539e1c2c583","Type":"ContainerStarted","Data":"6e552d87afc0d69fc59c53455fad66f9e0484f514992213df60bd3f13e0c9cff"} Feb 24 02:25:33.703116 master-0 kubenswrapper[31411]: I0224 02:25:33.703101 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"977f33ee-175a-43f4-8b50-f539e1c2c583","Type":"ContainerStarted","Data":"3ff74bf0141becc2c6649389e4209f99de557a6f9d600c0df5260be30be20f8f"} Feb 24 02:25:33.703116 master-0 kubenswrapper[31411]: I0224 02:25:33.703115 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"977f33ee-175a-43f4-8b50-f539e1c2c583","Type":"ContainerStarted","Data":"0a7fe1db9e17742be716a3e27e409388bccc831bddd53ccfdce88f30fbc797cf"} Feb 24 02:25:33.707264 master-0 kubenswrapper[31411]: I0224 02:25:33.707197 31411 generic.go:334] "Generic (PLEG): container finished" podID="c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0" containerID="1ae8426e4791ce62b0c1642199717d7808b508a5d2ac9b6368860c5102e89563" exitCode=0 Feb 24 02:25:33.707340 master-0 kubenswrapper[31411]: I0224 02:25:33.707268 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0","Type":"ContainerDied","Data":"1ae8426e4791ce62b0c1642199717d7808b508a5d2ac9b6368860c5102e89563"} Feb 24 02:25:33.707340 master-0 kubenswrapper[31411]: I0224 02:25:33.707305 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0","Type":"ContainerStarted","Data":"3ce60aed37692610416a502b3285f72c41b45fc7703e3e9baeaffe7b5c330429"} Feb 24 02:25:33.748493 master-0 kubenswrapper[31411]: I0224 02:25:33.748357 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" podStartSLOduration=3.176617838 podStartE2EDuration="7.748323023s" podCreationTimestamp="2026-02-24 02:25:26 +0000 UTC" firstStartedPulling="2026-02-24 02:25:27.20566091 +0000 UTC m=+270.422858766" lastFinishedPulling="2026-02-24 02:25:31.777366105 +0000 UTC m=+274.994563951" observedRunningTime="2026-02-24 02:25:33.737325636 +0000 UTC m=+276.954523512" watchObservedRunningTime="2026-02-24 02:25:33.748323023 +0000 UTC m=+276.965520909" Feb 24 02:25:34.727819 master-0 kubenswrapper[31411]: I0224 02:25:34.727707 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"977f33ee-175a-43f4-8b50-f539e1c2c583","Type":"ContainerStarted","Data":"7804fd9866fdaed78206c10634613b2c2891a8cdba7a37661f0b36deb87e3b62"} Feb 24 02:25:34.727819 master-0 kubenswrapper[31411]: I0224 02:25:34.727810 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"977f33ee-175a-43f4-8b50-f539e1c2c583","Type":"ContainerStarted","Data":"3efb0a9c17883bb02b380792274f3da9ec8467dcb84b77771752b35e1712e61b"} Feb 24 02:25:34.731356 master-0 kubenswrapper[31411]: I0224 02:25:34.731254 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" event={"ID":"d57a5233-cd86-45d7-9f96-92eb0cc06b7d","Type":"ContainerStarted","Data":"c8b1364e723282f675c9a08ae6dd9fd8b48ad65376122cf7b8ad5143249c7798"} Feb 24 02:25:34.800520 master-0 kubenswrapper[31411]: I0224 02:25:34.800400 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.247507713 podStartE2EDuration="9.800357941s" podCreationTimestamp="2026-02-24 02:25:25 +0000 UTC" firstStartedPulling="2026-02-24 02:25:26.214403996 +0000 UTC m=+269.431601872" lastFinishedPulling="2026-02-24 02:25:31.767254224 +0000 UTC m=+274.984452100" observedRunningTime="2026-02-24 02:25:34.7622629 +0000 UTC m=+277.979460786" watchObservedRunningTime="2026-02-24 02:25:34.800357941 +0000 UTC m=+278.017555797" Feb 24 02:25:34.836434 master-0 kubenswrapper[31411]: I0224 02:25:34.836311 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-cc55f5fb6-hcn4g" podStartSLOduration=3.461718213 podStartE2EDuration="5.836277161s" podCreationTimestamp="2026-02-24 02:25:29 +0000 UTC" firstStartedPulling="2026-02-24 02:25:30.40366822 +0000 UTC m=+273.620866066" lastFinishedPulling="2026-02-24 02:25:32.778227128 +0000 UTC m=+275.995425014" observedRunningTime="2026-02-24 
02:25:34.816158961 +0000 UTC m=+278.033356837" watchObservedRunningTime="2026-02-24 02:25:34.836277161 +0000 UTC m=+278.053475017" Feb 24 02:25:35.648646 master-0 kubenswrapper[31411]: I0224 02:25:35.648503 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79b5f69b87-9qbb4"] Feb 24 02:25:35.649887 master-0 kubenswrapper[31411]: I0224 02:25:35.649862 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.678039 master-0 kubenswrapper[31411]: I0224 02:25:35.677970 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79b5f69b87-9qbb4"] Feb 24 02:25:35.698619 master-0 kubenswrapper[31411]: I0224 02:25:35.698040 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-oauth-serving-cert\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.698619 master-0 kubenswrapper[31411]: I0224 02:25:35.698084 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-serving-cert\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.699070 master-0 kubenswrapper[31411]: I0224 02:25:35.698417 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgnf\" (UniqueName: \"kubernetes.io/projected/db8e2a95-f0dd-4199-8ba1-51aac840146e-kube-api-access-vpgnf\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 
02:25:35.699070 master-0 kubenswrapper[31411]: I0224 02:25:35.698901 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-service-ca\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.699070 master-0 kubenswrapper[31411]: I0224 02:25:35.699000 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-trusted-ca-bundle\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.699070 master-0 kubenswrapper[31411]: I0224 02:25:35.699032 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-oauth-config\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.699324 master-0 kubenswrapper[31411]: I0224 02:25:35.699114 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-config\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.801910 master-0 kubenswrapper[31411]: I0224 02:25:35.801824 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-oauth-serving-cert\") pod \"console-79b5f69b87-9qbb4\" 
(UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.801910 master-0 kubenswrapper[31411]: I0224 02:25:35.801938 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-serving-cert\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.803742 master-0 kubenswrapper[31411]: I0224 02:25:35.801987 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpgnf\" (UniqueName: \"kubernetes.io/projected/db8e2a95-f0dd-4199-8ba1-51aac840146e-kube-api-access-vpgnf\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.803742 master-0 kubenswrapper[31411]: I0224 02:25:35.802060 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-service-ca\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.803742 master-0 kubenswrapper[31411]: I0224 02:25:35.802190 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-oauth-config\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.803742 master-0 kubenswrapper[31411]: I0224 02:25:35.802238 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-trusted-ca-bundle\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.803742 master-0 kubenswrapper[31411]: I0224 02:25:35.802309 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-config\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.805028 master-0 kubenswrapper[31411]: I0224 02:25:35.804399 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-oauth-serving-cert\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.805028 master-0 kubenswrapper[31411]: I0224 02:25:35.804899 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-trusted-ca-bundle\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.805386 master-0 kubenswrapper[31411]: I0224 02:25:35.805131 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-config\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.805386 master-0 kubenswrapper[31411]: I0224 02:25:35.805195 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-service-ca\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.809031 master-0 kubenswrapper[31411]: I0224 02:25:35.808973 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-oauth-config\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.821459 master-0 kubenswrapper[31411]: I0224 02:25:35.821404 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-serving-cert\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:35.833139 master-0 kubenswrapper[31411]: I0224 02:25:35.833093 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpgnf\" (UniqueName: \"kubernetes.io/projected/db8e2a95-f0dd-4199-8ba1-51aac840146e-kube-api-access-vpgnf\") pod \"console-79b5f69b87-9qbb4\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:36.032939 master-0 kubenswrapper[31411]: I0224 02:25:36.032856 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:36.517187 master-0 kubenswrapper[31411]: I0224 02:25:36.517123 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79b5f69b87-9qbb4"] Feb 24 02:25:36.672198 master-0 kubenswrapper[31411]: I0224 02:25:36.672125 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" Feb 24 02:25:37.884642 master-0 kubenswrapper[31411]: W0224 02:25:37.884301 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb8e2a95_f0dd_4199_8ba1_51aac840146e.slice/crio-03d97ad09bba8d15baea6c8f331911b8d1bcb7190a8cb9793c7f8fa5ce314a57 WatchSource:0}: Error finding container 03d97ad09bba8d15baea6c8f331911b8d1bcb7190a8cb9793c7f8fa5ce314a57: Status 404 returned error can't find the container with id 03d97ad09bba8d15baea6c8f331911b8d1bcb7190a8cb9793c7f8fa5ce314a57 Feb 24 02:25:38.772607 master-0 kubenswrapper[31411]: I0224 02:25:38.772367 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79b5f69b87-9qbb4" event={"ID":"db8e2a95-f0dd-4199-8ba1-51aac840146e","Type":"ContainerStarted","Data":"a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d"} Feb 24 02:25:38.772607 master-0 kubenswrapper[31411]: I0224 02:25:38.772439 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79b5f69b87-9qbb4" event={"ID":"db8e2a95-f0dd-4199-8ba1-51aac840146e","Type":"ContainerStarted","Data":"03d97ad09bba8d15baea6c8f331911b8d1bcb7190a8cb9793c7f8fa5ce314a57"} Feb 24 02:25:38.784841 master-0 kubenswrapper[31411]: I0224 02:25:38.784784 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0","Type":"ContainerStarted","Data":"ed9b25501f6a3d2ca0cdd939999b192d3c529ab68c29c92ee9ffe3d1f13464ca"} Feb 24 02:25:38.784841 master-0 kubenswrapper[31411]: I0224 02:25:38.784850 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0","Type":"ContainerStarted","Data":"8b9f93bbfc36bdda034def0da178b8f6d824a21d4fe412dbc015373073a19372"} Feb 24 02:25:38.785133 master-0 kubenswrapper[31411]: I0224 02:25:38.784865 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0","Type":"ContainerStarted","Data":"6ba8ae083b1d8367afdec4986a8128aaf9710808fa221455ea5aa7d1ad4d97ce"} Feb 24 02:25:38.813601 master-0 kubenswrapper[31411]: I0224 02:25:38.812921 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79b5f69b87-9qbb4" podStartSLOduration=3.812883463 podStartE2EDuration="3.812883463s" podCreationTimestamp="2026-02-24 02:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:25:38.799409058 +0000 UTC m=+282.016606954" watchObservedRunningTime="2026-02-24 02:25:38.812883463 +0000 UTC m=+282.030081359" Feb 24 02:25:39.819792 master-0 kubenswrapper[31411]: I0224 02:25:39.819708 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0","Type":"ContainerStarted","Data":"51a67fa956cf95aaa0aa6befb947de1cde534ca588af71f4745cad7f5dda158a"} Feb 24 02:25:39.820790 master-0 kubenswrapper[31411]: I0224 02:25:39.819862 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0","Type":"ContainerStarted","Data":"1baaceb0592ffd77b4cadc7306f88b11769b94368d6485a63685a55ede4016e0"} Feb 24 02:25:39.820790 master-0 kubenswrapper[31411]: I0224 02:25:39.819895 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0","Type":"ContainerStarted","Data":"e8473bb95f4dd6711a80801c5fe6773810d3b6e321ad2f2a2aad16aa0db88032"} Feb 24 02:25:40.964257 master-0 kubenswrapper[31411]: I0224 02:25:40.964153 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 02:25:46.033191 master-0 kubenswrapper[31411]: I0224 02:25:46.033098 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:46.034275 master-0 kubenswrapper[31411]: I0224 02:25:46.033699 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:46.041491 master-0 kubenswrapper[31411]: I0224 02:25:46.041442 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:46.078358 master-0 kubenswrapper[31411]: I0224 02:25:46.078229 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=11.788766954 podStartE2EDuration="16.078200338s" podCreationTimestamp="2026-02-24 02:25:30 +0000 UTC" firstStartedPulling="2026-02-24 02:25:33.710333255 +0000 UTC m=+276.927531101" lastFinishedPulling="2026-02-24 02:25:37.999766599 +0000 UTC m=+281.216964485" observedRunningTime="2026-02-24 02:25:39.874098137 +0000 UTC m=+283.091296013" watchObservedRunningTime="2026-02-24 02:25:46.078200338 +0000 UTC m=+289.295398224" Feb 24 02:25:46.905014 master-0 kubenswrapper[31411]: I0224 02:25:46.904901 31411 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:25:47.506001 master-0 kubenswrapper[31411]: I0224 02:25:47.505359 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-576f8c76bf-2xx46"] Feb 24 02:25:49.374749 master-0 kubenswrapper[31411]: I0224 02:25:49.374646 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:49.375747 master-0 kubenswrapper[31411]: I0224 02:25:49.374793 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:25:57.090095 master-0 kubenswrapper[31411]: I0224 02:25:57.085172 31411 kubelet.go:1505] "Image garbage collection succeeded" Feb 24 02:26:09.383108 master-0 kubenswrapper[31411]: I0224 02:26:09.383016 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:26:09.389496 master-0 kubenswrapper[31411]: I0224 02:26:09.389451 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" Feb 24 02:26:12.572846 master-0 kubenswrapper[31411]: I0224 02:26:12.572741 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-576f8c76bf-2xx46" podUID="92850e60-7126-43e8-a7f7-fe411b6fc2b7" containerName="console" containerID="cri-o://e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b" gracePeriod=15 Feb 24 02:26:13.191687 master-0 kubenswrapper[31411]: I0224 02:26:13.191624 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-576f8c76bf-2xx46_92850e60-7126-43e8-a7f7-fe411b6fc2b7/console/0.log" Feb 24 02:26:13.191884 master-0 kubenswrapper[31411]: I0224 02:26:13.191777 31411 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-console/console-576f8c76bf-2xx46" Feb 24 02:26:13.265289 master-0 kubenswrapper[31411]: I0224 02:26:13.265185 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-service-ca\") pod \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " Feb 24 02:26:13.265289 master-0 kubenswrapper[31411]: I0224 02:26:13.265300 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-serving-cert\") pod \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " Feb 24 02:26:13.265683 master-0 kubenswrapper[31411]: I0224 02:26:13.265376 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-oauth-config\") pod \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " Feb 24 02:26:13.265683 master-0 kubenswrapper[31411]: I0224 02:26:13.265493 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-oauth-serving-cert\") pod \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " Feb 24 02:26:13.265823 master-0 kubenswrapper[31411]: I0224 02:26:13.265693 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-trusted-ca-bundle\") pod \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") " Feb 24 02:26:13.265823 master-0 
kubenswrapper[31411]: I0224 02:26:13.265766 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96mmc\" (UniqueName: \"kubernetes.io/projected/92850e60-7126-43e8-a7f7-fe411b6fc2b7-kube-api-access-96mmc\") pod \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") "
Feb 24 02:26:13.265823 master-0 kubenswrapper[31411]: I0224 02:26:13.265811 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-config\") pod \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\" (UID: \"92850e60-7126-43e8-a7f7-fe411b6fc2b7\") "
Feb 24 02:26:13.268617 master-0 kubenswrapper[31411]: I0224 02:26:13.267054 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-config" (OuterVolumeSpecName: "console-config") pod "92850e60-7126-43e8-a7f7-fe411b6fc2b7" (UID: "92850e60-7126-43e8-a7f7-fe411b6fc2b7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:26:13.268617 master-0 kubenswrapper[31411]: I0224 02:26:13.267379 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-service-ca" (OuterVolumeSpecName: "service-ca") pod "92850e60-7126-43e8-a7f7-fe411b6fc2b7" (UID: "92850e60-7126-43e8-a7f7-fe411b6fc2b7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:26:13.268617 master-0 kubenswrapper[31411]: I0224 02:26:13.268111 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "92850e60-7126-43e8-a7f7-fe411b6fc2b7" (UID: "92850e60-7126-43e8-a7f7-fe411b6fc2b7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:26:13.268617 master-0 kubenswrapper[31411]: I0224 02:26:13.268230 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "92850e60-7126-43e8-a7f7-fe411b6fc2b7" (UID: "92850e60-7126-43e8-a7f7-fe411b6fc2b7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:26:13.274295 master-0 kubenswrapper[31411]: I0224 02:26:13.273106 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "92850e60-7126-43e8-a7f7-fe411b6fc2b7" (UID: "92850e60-7126-43e8-a7f7-fe411b6fc2b7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:26:13.274295 master-0 kubenswrapper[31411]: I0224 02:26:13.273189 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92850e60-7126-43e8-a7f7-fe411b6fc2b7-kube-api-access-96mmc" (OuterVolumeSpecName: "kube-api-access-96mmc") pod "92850e60-7126-43e8-a7f7-fe411b6fc2b7" (UID: "92850e60-7126-43e8-a7f7-fe411b6fc2b7"). InnerVolumeSpecName "kube-api-access-96mmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:26:13.277688 master-0 kubenswrapper[31411]: I0224 02:26:13.275612 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "92850e60-7126-43e8-a7f7-fe411b6fc2b7" (UID: "92850e60-7126-43e8-a7f7-fe411b6fc2b7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:26:13.368692 master-0 kubenswrapper[31411]: I0224 02:26:13.368465 31411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:26:13.368692 master-0 kubenswrapper[31411]: I0224 02:26:13.368522 31411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 02:26:13.368692 master-0 kubenswrapper[31411]: I0224 02:26:13.368542 31411 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:26:13.368692 master-0 kubenswrapper[31411]: I0224 02:26:13.368562 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96mmc\" (UniqueName: \"kubernetes.io/projected/92850e60-7126-43e8-a7f7-fe411b6fc2b7-kube-api-access-96mmc\") on node \"master-0\" DevicePath \"\""
Feb 24 02:26:13.369975 master-0 kubenswrapper[31411]: I0224 02:26:13.369914 31411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:26:13.369975 master-0 kubenswrapper[31411]: I0224 02:26:13.369961 31411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92850e60-7126-43e8-a7f7-fe411b6fc2b7-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 02:26:13.369975 master-0 kubenswrapper[31411]: I0224 02:26:13.369978 31411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92850e60-7126-43e8-a7f7-fe411b6fc2b7-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 02:26:13.375889 master-0 kubenswrapper[31411]: I0224 02:26:13.375847 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-576f8c76bf-2xx46_92850e60-7126-43e8-a7f7-fe411b6fc2b7/console/0.log"
Feb 24 02:26:13.376116 master-0 kubenswrapper[31411]: I0224 02:26:13.375916 31411 generic.go:334] "Generic (PLEG): container finished" podID="92850e60-7126-43e8-a7f7-fe411b6fc2b7" containerID="e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b" exitCode=2
Feb 24 02:26:13.376116 master-0 kubenswrapper[31411]: I0224 02:26:13.375959 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576f8c76bf-2xx46" event={"ID":"92850e60-7126-43e8-a7f7-fe411b6fc2b7","Type":"ContainerDied","Data":"e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b"}
Feb 24 02:26:13.376116 master-0 kubenswrapper[31411]: I0224 02:26:13.375995 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576f8c76bf-2xx46" event={"ID":"92850e60-7126-43e8-a7f7-fe411b6fc2b7","Type":"ContainerDied","Data":"56a343b3f667321b02fb71aad0519d20e7f5a4b0fc86a758fd56c0bbaff5ee4b"}
Feb 24 02:26:13.376116 master-0 kubenswrapper[31411]: I0224 02:26:13.375995 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-576f8c76bf-2xx46"
Feb 24 02:26:13.376116 master-0 kubenswrapper[31411]: I0224 02:26:13.376019 31411 scope.go:117] "RemoveContainer" containerID="e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b"
Feb 24 02:26:13.408095 master-0 kubenswrapper[31411]: I0224 02:26:13.408036 31411 scope.go:117] "RemoveContainer" containerID="e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b"
Feb 24 02:26:13.408833 master-0 kubenswrapper[31411]: E0224 02:26:13.408773 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b\": container with ID starting with e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b not found: ID does not exist" containerID="e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b"
Feb 24 02:26:13.408960 master-0 kubenswrapper[31411]: I0224 02:26:13.408836 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b"} err="failed to get container status \"e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b\": rpc error: code = NotFound desc = could not find container \"e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b\": container with ID starting with e924d3d7846f8ae322be2f6da7bc199a5e305c49a262320d6ac1269ac1a0cf9b not found: ID does not exist"
Feb 24 02:26:13.430881 master-0 kubenswrapper[31411]: I0224 02:26:13.430805 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-576f8c76bf-2xx46"]
Feb 24 02:26:13.437613 master-0 kubenswrapper[31411]: I0224 02:26:13.437534 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-576f8c76bf-2xx46"]
Feb 24 02:26:15.109870 master-0 kubenswrapper[31411]: I0224 02:26:15.109749 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92850e60-7126-43e8-a7f7-fe411b6fc2b7" path="/var/lib/kubelet/pods/92850e60-7126-43e8-a7f7-fe411b6fc2b7/volumes"
Feb 24 02:26:30.963957 master-0 kubenswrapper[31411]: I0224 02:26:30.963900 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 02:26:31.015641 master-0 kubenswrapper[31411]: I0224 02:26:31.015527 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 02:26:31.607281 master-0 kubenswrapper[31411]: I0224 02:26:31.607218 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 02:26:38.130314 master-0 kubenswrapper[31411]: I0224 02:26:38.130060 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 24 02:26:38.131872 master-0 kubenswrapper[31411]: E0224 02:26:38.130551 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92850e60-7126-43e8-a7f7-fe411b6fc2b7" containerName="console"
Feb 24 02:26:38.131872 master-0 kubenswrapper[31411]: I0224 02:26:38.130589 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="92850e60-7126-43e8-a7f7-fe411b6fc2b7" containerName="console"
Feb 24 02:26:38.131872 master-0 kubenswrapper[31411]: I0224 02:26:38.131699 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="92850e60-7126-43e8-a7f7-fe411b6fc2b7" containerName="console"
Feb 24 02:26:38.132733 master-0 kubenswrapper[31411]: I0224 02:26:38.132646 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.136814 master-0 kubenswrapper[31411]: I0224 02:26:38.136568 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-k86pk"
Feb 24 02:26:38.137136 master-0 kubenswrapper[31411]: I0224 02:26:38.136802 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 24 02:26:38.151413 master-0 kubenswrapper[31411]: I0224 02:26:38.151355 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 24 02:26:38.250465 master-0 kubenswrapper[31411]: I0224 02:26:38.250371 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ce98a71-76d2-4dca-a329-12edcd926f2e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.250465 master-0 kubenswrapper[31411]: I0224 02:26:38.250475 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.250893 master-0 kubenswrapper[31411]: I0224 02:26:38.250730 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-var-lock\") pod \"installer-4-master-0\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.353121 master-0 kubenswrapper[31411]: I0224 02:26:38.353031 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ce98a71-76d2-4dca-a329-12edcd926f2e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.353121 master-0 kubenswrapper[31411]: I0224 02:26:38.353118 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.353510 master-0 kubenswrapper[31411]: I0224 02:26:38.353411 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-var-lock\") pod \"installer-4-master-0\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.353622 master-0 kubenswrapper[31411]: I0224 02:26:38.353513 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.353622 master-0 kubenswrapper[31411]: I0224 02:26:38.353595 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-var-lock\") pod \"installer-4-master-0\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.384905 master-0 kubenswrapper[31411]: I0224 02:26:38.384740 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ce98a71-76d2-4dca-a329-12edcd926f2e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.465996 master-0 kubenswrapper[31411]: I0224 02:26:38.465926 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 02:26:38.971341 master-0 kubenswrapper[31411]: I0224 02:26:38.971271 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 24 02:26:38.973144 master-0 kubenswrapper[31411]: W0224 02:26:38.973028 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9ce98a71_76d2_4dca_a329_12edcd926f2e.slice/crio-5160537e55cfce859a2c6dac2756b1d812115c4f27b28d2771512903b8926c06 WatchSource:0}: Error finding container 5160537e55cfce859a2c6dac2756b1d812115c4f27b28d2771512903b8926c06: Status 404 returned error can't find the container with id 5160537e55cfce859a2c6dac2756b1d812115c4f27b28d2771512903b8926c06
Feb 24 02:26:39.641301 master-0 kubenswrapper[31411]: I0224 02:26:39.640387 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"9ce98a71-76d2-4dca-a329-12edcd926f2e","Type":"ContainerStarted","Data":"1af12da9edd7fbe2e8a879b98aebbdf16570d94c6a0c0cf1965ea814db1b7c46"}
Feb 24 02:26:39.641301 master-0 kubenswrapper[31411]: I0224 02:26:39.640477 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"9ce98a71-76d2-4dca-a329-12edcd926f2e","Type":"ContainerStarted","Data":"5160537e55cfce859a2c6dac2756b1d812115c4f27b28d2771512903b8926c06"}
Feb 24 02:26:39.670899 master-0 kubenswrapper[31411]: I0224 02:26:39.670771 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=1.6707443309999999 podStartE2EDuration="1.670744331s" podCreationTimestamp="2026-02-24 02:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:26:39.666147571 +0000 UTC m=+342.883345447" watchObservedRunningTime="2026-02-24 02:26:39.670744331 +0000 UTC m=+342.887942207"
Feb 24 02:26:39.939665 master-0 kubenswrapper[31411]: I0224 02:26:39.939446 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d9776c47f-6p4nc"]
Feb 24 02:26:39.940746 master-0 kubenswrapper[31411]: I0224 02:26:39.940705 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:39.997132 master-0 kubenswrapper[31411]: I0224 02:26:39.997051 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9776c47f-6p4nc"]
Feb 24 02:26:40.095096 master-0 kubenswrapper[31411]: I0224 02:26:40.095009 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-trusted-ca-bundle\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.095448 master-0 kubenswrapper[31411]: I0224 02:26:40.095130 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-oauth-serving-cert\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.095448 master-0 kubenswrapper[31411]: I0224 02:26:40.095182 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxqb\" (UniqueName: \"kubernetes.io/projected/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-kube-api-access-4wxqb\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.095448 master-0 kubenswrapper[31411]: I0224 02:26:40.095235 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-serving-cert\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.095448 master-0 kubenswrapper[31411]: I0224 02:26:40.095310 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-oauth-config\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.095739 master-0 kubenswrapper[31411]: I0224 02:26:40.095459 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-config\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.095739 master-0 kubenswrapper[31411]: I0224 02:26:40.095615 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-service-ca\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.198070 master-0 kubenswrapper[31411]: I0224 02:26:40.197983 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-oauth-config\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.198338 master-0 kubenswrapper[31411]: I0224 02:26:40.198138 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-config\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.198517 master-0 kubenswrapper[31411]: I0224 02:26:40.198444 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-service-ca\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.198777 master-0 kubenswrapper[31411]: I0224 02:26:40.198720 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-trusted-ca-bundle\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.198901 master-0 kubenswrapper[31411]: I0224 02:26:40.198871 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-oauth-serving-cert\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.198985 master-0 kubenswrapper[31411]: I0224 02:26:40.198935 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxqb\" (UniqueName: \"kubernetes.io/projected/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-kube-api-access-4wxqb\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.199064 master-0 kubenswrapper[31411]: I0224 02:26:40.199036 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-serving-cert\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.201661 master-0 kubenswrapper[31411]: I0224 02:26:40.201127 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-config\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.201661 master-0 kubenswrapper[31411]: I0224 02:26:40.201451 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-service-ca\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.201661 master-0 kubenswrapper[31411]: I0224 02:26:40.201525 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-oauth-serving-cert\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.201979 master-0 kubenswrapper[31411]: I0224 02:26:40.201666 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-trusted-ca-bundle\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.204658 master-0 kubenswrapper[31411]: I0224 02:26:40.204549 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-oauth-config\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.205428 master-0 kubenswrapper[31411]: I0224 02:26:40.205363 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-serving-cert\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.227877 master-0 kubenswrapper[31411]: I0224 02:26:40.227792 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxqb\" (UniqueName: \"kubernetes.io/projected/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-kube-api-access-4wxqb\") pod \"console-5d9776c47f-6p4nc\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.295643 master-0 kubenswrapper[31411]: I0224 02:26:40.295519 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9776c47f-6p4nc"
Feb 24 02:26:40.837913 master-0 kubenswrapper[31411]: I0224 02:26:40.837854 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9776c47f-6p4nc"]
Feb 24 02:26:40.845319 master-0 kubenswrapper[31411]: W0224 02:26:40.845230 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bfb3b1e_6ed0_4816_b812_7d0c38cb7812.slice/crio-a9c80f9d97ec6655b25c704cefad10b9602acf5ed75f1dea0a5ece4f2fae34b1 WatchSource:0}: Error finding container a9c80f9d97ec6655b25c704cefad10b9602acf5ed75f1dea0a5ece4f2fae34b1: Status 404 returned error can't find the container with id a9c80f9d97ec6655b25c704cefad10b9602acf5ed75f1dea0a5ece4f2fae34b1
Feb 24 02:26:41.347723 master-0 kubenswrapper[31411]: I0224 02:26:41.347634 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9776c47f-6p4nc"]
Feb 24 02:26:41.382135 master-0 kubenswrapper[31411]: I0224 02:26:41.382040 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6647cb86fc-wzjr8"]
Feb 24 02:26:41.383266 master-0 kubenswrapper[31411]: I0224 02:26:41.383205 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.404553 master-0 kubenswrapper[31411]: I0224 02:26:41.404446 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6647cb86fc-wzjr8"]
Feb 24 02:26:41.530233 master-0 kubenswrapper[31411]: I0224 02:26:41.530168 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-config\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.530233 master-0 kubenswrapper[31411]: I0224 02:26:41.530234 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-oauth-config\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.530233 master-0 kubenswrapper[31411]: I0224 02:26:41.530279 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-oauth-serving-cert\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.530861 master-0 kubenswrapper[31411]: I0224 02:26:41.530463 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-service-ca\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.530861 master-0 kubenswrapper[31411]: I0224 02:26:41.530637 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-trusted-ca-bundle\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.530861 master-0 kubenswrapper[31411]: I0224 02:26:41.530771 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft6s6\" (UniqueName: \"kubernetes.io/projected/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-kube-api-access-ft6s6\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.531146 master-0 kubenswrapper[31411]: I0224 02:26:41.530942 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-serving-cert\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.633163 master-0 kubenswrapper[31411]: I0224 02:26:41.632984 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-serving-cert\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.633163 master-0 kubenswrapper[31411]: I0224 02:26:41.633094 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-config\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.633163 master-0 kubenswrapper[31411]: I0224 02:26:41.633146 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-oauth-config\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.633677 master-0 kubenswrapper[31411]: I0224 02:26:41.633198 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-oauth-serving-cert\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.633677 master-0 kubenswrapper[31411]: I0224 02:26:41.633507 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-service-ca\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.635701 master-0 kubenswrapper[31411]: I0224 02:26:41.633765 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-trusted-ca-bundle\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.635701 master-0 kubenswrapper[31411]: I0224 02:26:41.634063 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft6s6\" (UniqueName: \"kubernetes.io/projected/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-kube-api-access-ft6s6\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.635701 master-0 kubenswrapper[31411]: I0224 02:26:41.635241 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-service-ca\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.635701 master-0 kubenswrapper[31411]: I0224 02:26:41.635238 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-config\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.635701 master-0 kubenswrapper[31411]: I0224 02:26:41.635426 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-oauth-serving-cert\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.636027 master-0 kubenswrapper[31411]: I0224 02:26:41.635889 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-trusted-ca-bundle\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.638835 master-0 kubenswrapper[31411]: I0224 02:26:41.638772 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-serving-cert\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.639655 master-0 kubenswrapper[31411]: I0224 02:26:41.639412 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-oauth-config\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.663566 master-0 kubenswrapper[31411]: I0224 02:26:41.663496 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9776c47f-6p4nc" event={"ID":"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812","Type":"ContainerStarted","Data":"cc28bd166f6148ad127e2ed83f30134ea4c987a96f8961de727341ec365ef396"}
Feb 24 02:26:41.663566 master-0 kubenswrapper[31411]: I0224 02:26:41.663563 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9776c47f-6p4nc" event={"ID":"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812","Type":"ContainerStarted","Data":"a9c80f9d97ec6655b25c704cefad10b9602acf5ed75f1dea0a5ece4f2fae34b1"}
Feb 24 02:26:41.665989 master-0 kubenswrapper[31411]: I0224 02:26:41.665771 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft6s6\" (UniqueName: \"kubernetes.io/projected/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-kube-api-access-ft6s6\") pod \"console-6647cb86fc-wzjr8\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") " pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:41.696398 master-0 kubenswrapper[31411]: I0224 02:26:41.696288 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d9776c47f-6p4nc" podStartSLOduration=2.696260036 podStartE2EDuration="2.696260036s" podCreationTimestamp="2026-02-24 02:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:26:41.693648892 +0000 UTC m=+344.910846768" watchObservedRunningTime="2026-02-24 02:26:41.696260036 +0000 UTC m=+344.913457912"
Feb 24 02:26:41.742220 master-0 kubenswrapper[31411]: I0224 02:26:41.742154 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:26:42.225237 master-0 kubenswrapper[31411]: I0224 02:26:42.225156 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6647cb86fc-wzjr8"]
Feb 24 02:26:42.231892 master-0 kubenswrapper[31411]: W0224 02:26:42.231777 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c5669c1_c4ed_4f40_b9f7_4e51d1fad369.slice/crio-7450de429ca07579a07d0b8e24d4f9c4c1527bb17a077ce11bf1b62717a13795 WatchSource:0}: Error finding container 7450de429ca07579a07d0b8e24d4f9c4c1527bb17a077ce11bf1b62717a13795: Status 404 returned error can't find the container with id 7450de429ca07579a07d0b8e24d4f9c4c1527bb17a077ce11bf1b62717a13795
Feb 24 02:26:42.675856 master-0 kubenswrapper[31411]: I0224 02:26:42.675570 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6647cb86fc-wzjr8" event={"ID":"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369","Type":"ContainerStarted","Data":"6e4ad7afb43c18234ab6e709dbec5a6f90b6c6185396ae54cfe05f08c10f1fd8"}
Feb 24 02:26:42.675856 master-0 kubenswrapper[31411]: I0224 02:26:42.675696 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6647cb86fc-wzjr8" event={"ID":"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369","Type":"ContainerStarted","Data":"7450de429ca07579a07d0b8e24d4f9c4c1527bb17a077ce11bf1b62717a13795"}
Feb 24 02:26:50.296304 master-0 kubenswrapper[31411]: I0224 02:26:50.296115 31411 kubelet.go:2542] "SyncLoop
(probe)" probe="readiness" status="" pod="openshift-console/console-5d9776c47f-6p4nc" Feb 24 02:26:51.743059 master-0 kubenswrapper[31411]: I0224 02:26:51.742972 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6647cb86fc-wzjr8" Feb 24 02:26:51.743059 master-0 kubenswrapper[31411]: I0224 02:26:51.743075 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6647cb86fc-wzjr8" Feb 24 02:26:51.752212 master-0 kubenswrapper[31411]: I0224 02:26:51.752142 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6647cb86fc-wzjr8" Feb 24 02:26:51.787732 master-0 kubenswrapper[31411]: I0224 02:26:51.787651 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6647cb86fc-wzjr8" Feb 24 02:26:51.815515 master-0 kubenswrapper[31411]: I0224 02:26:51.815414 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6647cb86fc-wzjr8" podStartSLOduration=10.815398134 podStartE2EDuration="10.815398134s" podCreationTimestamp="2026-02-24 02:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:26:42.713181186 +0000 UTC m=+345.930379042" watchObservedRunningTime="2026-02-24 02:26:51.815398134 +0000 UTC m=+355.032595980" Feb 24 02:26:51.952088 master-0 kubenswrapper[31411]: I0224 02:26:51.950841 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79b5f69b87-9qbb4"] Feb 24 02:27:07.741422 master-0 kubenswrapper[31411]: I0224 02:27:07.741281 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d9776c47f-6p4nc" podUID="3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" containerName="console" 
containerID="cri-o://cc28bd166f6148ad127e2ed83f30134ea4c987a96f8961de727341ec365ef396" gracePeriod=15 Feb 24 02:27:07.941504 master-0 kubenswrapper[31411]: I0224 02:27:07.941412 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9776c47f-6p4nc_3bfb3b1e-6ed0-4816-b812-7d0c38cb7812/console/0.log" Feb 24 02:27:07.941928 master-0 kubenswrapper[31411]: I0224 02:27:07.941604 31411 generic.go:334] "Generic (PLEG): container finished" podID="3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" containerID="cc28bd166f6148ad127e2ed83f30134ea4c987a96f8961de727341ec365ef396" exitCode=2 Feb 24 02:27:07.941928 master-0 kubenswrapper[31411]: I0224 02:27:07.941661 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9776c47f-6p4nc" event={"ID":"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812","Type":"ContainerDied","Data":"cc28bd166f6148ad127e2ed83f30134ea4c987a96f8961de727341ec365ef396"} Feb 24 02:27:08.450909 master-0 kubenswrapper[31411]: I0224 02:27:08.450851 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9776c47f-6p4nc_3bfb3b1e-6ed0-4816-b812-7d0c38cb7812/console/0.log" Feb 24 02:27:08.451176 master-0 kubenswrapper[31411]: I0224 02:27:08.450972 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d9776c47f-6p4nc" Feb 24 02:27:08.632358 master-0 kubenswrapper[31411]: I0224 02:27:08.632103 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-oauth-serving-cert\") pod \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " Feb 24 02:27:08.632358 master-0 kubenswrapper[31411]: I0224 02:27:08.632257 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wxqb\" (UniqueName: \"kubernetes.io/projected/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-kube-api-access-4wxqb\") pod \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " Feb 24 02:27:08.632358 master-0 kubenswrapper[31411]: I0224 02:27:08.632365 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-trusted-ca-bundle\") pod \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " Feb 24 02:27:08.632882 master-0 kubenswrapper[31411]: I0224 02:27:08.632412 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-service-ca\") pod \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " Feb 24 02:27:08.632882 master-0 kubenswrapper[31411]: I0224 02:27:08.632464 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-oauth-config\") pod \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " Feb 24 02:27:08.632882 master-0 kubenswrapper[31411]: 
I0224 02:27:08.632504 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-serving-cert\") pod \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " Feb 24 02:27:08.632882 master-0 kubenswrapper[31411]: I0224 02:27:08.632556 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-config\") pod \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\" (UID: \"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812\") " Feb 24 02:27:08.633171 master-0 kubenswrapper[31411]: I0224 02:27:08.633055 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" (UID: "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:08.633654 master-0 kubenswrapper[31411]: I0224 02:27:08.633549 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-service-ca" (OuterVolumeSpecName: "service-ca") pod "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" (UID: "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:08.634095 master-0 kubenswrapper[31411]: I0224 02:27:08.634008 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" (UID: "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:08.634095 master-0 kubenswrapper[31411]: I0224 02:27:08.634061 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-config" (OuterVolumeSpecName: "console-config") pod "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" (UID: "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:08.638376 master-0 kubenswrapper[31411]: I0224 02:27:08.638285 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-kube-api-access-4wxqb" (OuterVolumeSpecName: "kube-api-access-4wxqb") pod "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" (UID: "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812"). InnerVolumeSpecName "kube-api-access-4wxqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:27:08.638858 master-0 kubenswrapper[31411]: I0224 02:27:08.638776 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" (UID: "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:27:08.638858 master-0 kubenswrapper[31411]: I0224 02:27:08.638818 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" (UID: "3bfb3b1e-6ed0-4816-b812-7d0c38cb7812"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:27:08.735556 master-0 kubenswrapper[31411]: I0224 02:27:08.735489 31411 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:08.735556 master-0 kubenswrapper[31411]: I0224 02:27:08.735551 31411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:08.735556 master-0 kubenswrapper[31411]: I0224 02:27:08.735602 31411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:08.735881 master-0 kubenswrapper[31411]: I0224 02:27:08.735626 31411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:08.735881 master-0 kubenswrapper[31411]: I0224 02:27:08.735647 31411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-console-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:08.735881 master-0 kubenswrapper[31411]: I0224 02:27:08.735671 31411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:08.735881 master-0 kubenswrapper[31411]: I0224 02:27:08.735689 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wxqb\" (UniqueName: 
\"kubernetes.io/projected/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812-kube-api-access-4wxqb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:08.956138 master-0 kubenswrapper[31411]: I0224 02:27:08.956063 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9776c47f-6p4nc_3bfb3b1e-6ed0-4816-b812-7d0c38cb7812/console/0.log" Feb 24 02:27:08.957070 master-0 kubenswrapper[31411]: I0224 02:27:08.956169 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9776c47f-6p4nc" event={"ID":"3bfb3b1e-6ed0-4816-b812-7d0c38cb7812","Type":"ContainerDied","Data":"a9c80f9d97ec6655b25c704cefad10b9602acf5ed75f1dea0a5ece4f2fae34b1"} Feb 24 02:27:08.957070 master-0 kubenswrapper[31411]: I0224 02:27:08.956237 31411 scope.go:117] "RemoveContainer" containerID="cc28bd166f6148ad127e2ed83f30134ea4c987a96f8961de727341ec365ef396" Feb 24 02:27:08.957070 master-0 kubenswrapper[31411]: I0224 02:27:08.956336 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d9776c47f-6p4nc" Feb 24 02:27:09.038451 master-0 kubenswrapper[31411]: I0224 02:27:09.038360 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9776c47f-6p4nc"] Feb 24 02:27:09.050932 master-0 kubenswrapper[31411]: I0224 02:27:09.050847 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d9776c47f-6p4nc"] Feb 24 02:27:09.116869 master-0 kubenswrapper[31411]: I0224 02:27:09.116773 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" path="/var/lib/kubelet/pods/3bfb3b1e-6ed0-4816-b812-7d0c38cb7812/volumes" Feb 24 02:27:12.529184 master-0 kubenswrapper[31411]: I0224 02:27:12.529095 31411 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:27:12.530316 master-0 kubenswrapper[31411]: I0224 02:27:12.529550 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="cluster-policy-controller" containerID="cri-o://aafef463fb6ec586aa93d68463140f0e495d6fd2628a807d6f4cc72093d982ad" gracePeriod=30 Feb 24 02:27:12.530316 master-0 kubenswrapper[31411]: I0224 02:27:12.529693 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://7221134b3af5b91666a2752516338d10accc510917d963f481c2b324d0d85d72" gracePeriod=30 Feb 24 02:27:12.530316 master-0 kubenswrapper[31411]: I0224 02:27:12.529726 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://2dd35a4b899d0d46f8cae0b0fdc77f4727b3df36f32d794fdf4c0908a1c24f54" gracePeriod=30 Feb 24 02:27:12.530316 master-0 kubenswrapper[31411]: I0224 02:27:12.530122 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" containerID="cri-o://9250496c585327c26795a4ba925297eda6aefab50f23daee888a6e7c19b4af75" gracePeriod=30 Feb 24 02:27:12.531355 master-0 kubenswrapper[31411]: I0224 02:27:12.531057 31411 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:27:12.531696 master-0 kubenswrapper[31411]: E0224 02:27:12.531647 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager-recovery-controller" Feb 24 02:27:12.531696 master-0 kubenswrapper[31411]: I0224 02:27:12.531684 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager-recovery-controller" Feb 24 02:27:12.531929 master-0 kubenswrapper[31411]: E0224 02:27:12.531736 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" Feb 24 02:27:12.531929 master-0 kubenswrapper[31411]: I0224 02:27:12.531750 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" Feb 24 02:27:12.531929 master-0 kubenswrapper[31411]: E0224 02:27:12.531775 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="cluster-policy-controller" Feb 24 02:27:12.531929 master-0 
kubenswrapper[31411]: I0224 02:27:12.531789 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="cluster-policy-controller" Feb 24 02:27:12.531929 master-0 kubenswrapper[31411]: E0224 02:27:12.531878 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" containerName="console" Feb 24 02:27:12.531929 master-0 kubenswrapper[31411]: I0224 02:27:12.531894 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" containerName="console" Feb 24 02:27:12.532312 master-0 kubenswrapper[31411]: E0224 02:27:12.531973 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager-cert-syncer" Feb 24 02:27:12.532312 master-0 kubenswrapper[31411]: I0224 02:27:12.531993 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager-cert-syncer" Feb 24 02:27:12.532312 master-0 kubenswrapper[31411]: I0224 02:27:12.532281 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bfb3b1e-6ed0-4816-b812-7d0c38cb7812" containerName="console" Feb 24 02:27:12.532312 master-0 kubenswrapper[31411]: I0224 02:27:12.532311 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager-cert-syncer" Feb 24 02:27:12.532610 master-0 kubenswrapper[31411]: I0224 02:27:12.532416 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="cluster-policy-controller" Feb 24 02:27:12.533836 master-0 kubenswrapper[31411]: I0224 02:27:12.533788 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" Feb 24 02:27:12.533944 master-0 kubenswrapper[31411]: I0224 
02:27:12.533839 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" Feb 24 02:27:12.533944 master-0 kubenswrapper[31411]: I0224 02:27:12.533881 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager-recovery-controller" Feb 24 02:27:12.534189 master-0 kubenswrapper[31411]: E0224 02:27:12.534148 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" Feb 24 02:27:12.534189 master-0 kubenswrapper[31411]: I0224 02:27:12.534174 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="754ca2ae56da4950b59492ccafe15df5" containerName="kube-controller-manager" Feb 24 02:27:12.709174 master-0 kubenswrapper[31411]: I0224 02:27:12.709096 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/90cab0b4b999dea8aa2983c3ba90887c-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"90cab0b4b999dea8aa2983c3ba90887c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:12.709331 master-0 kubenswrapper[31411]: I0224 02:27:12.709247 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/90cab0b4b999dea8aa2983c3ba90887c-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"90cab0b4b999dea8aa2983c3ba90887c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:12.809565 master-0 kubenswrapper[31411]: I0224 02:27:12.809415 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_754ca2ae56da4950b59492ccafe15df5/kube-controller-manager-cert-syncer/0.log" Feb 
24 02:27:12.810509 master-0 kubenswrapper[31411]: I0224 02:27:12.810458 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/90cab0b4b999dea8aa2983c3ba90887c-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"90cab0b4b999dea8aa2983c3ba90887c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:12.810632 master-0 kubenswrapper[31411]: I0224 02:27:12.810552 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/90cab0b4b999dea8aa2983c3ba90887c-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"90cab0b4b999dea8aa2983c3ba90887c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:12.810791 master-0 kubenswrapper[31411]: I0224 02:27:12.810737 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/90cab0b4b999dea8aa2983c3ba90887c-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"90cab0b4b999dea8aa2983c3ba90887c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:12.810898 master-0 kubenswrapper[31411]: I0224 02:27:12.810825 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/90cab0b4b999dea8aa2983c3ba90887c-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"90cab0b4b999dea8aa2983c3ba90887c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:12.812468 master-0 kubenswrapper[31411]: I0224 02:27:12.812414 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_754ca2ae56da4950b59492ccafe15df5/kube-controller-manager/0.log" Feb 24 02:27:12.812640 master-0 kubenswrapper[31411]: I0224 
02:27:12.812562 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:12.817813 master-0 kubenswrapper[31411]: I0224 02:27:12.817738 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="754ca2ae56da4950b59492ccafe15df5" podUID="90cab0b4b999dea8aa2983c3ba90887c" Feb 24 02:27:12.999994 master-0 kubenswrapper[31411]: I0224 02:27:12.999931 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_754ca2ae56da4950b59492ccafe15df5/kube-controller-manager-cert-syncer/0.log" Feb 24 02:27:13.002163 master-0 kubenswrapper[31411]: I0224 02:27:13.001996 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_754ca2ae56da4950b59492ccafe15df5/kube-controller-manager/0.log" Feb 24 02:27:13.002383 master-0 kubenswrapper[31411]: I0224 02:27:13.002326 31411 generic.go:334] "Generic (PLEG): container finished" podID="754ca2ae56da4950b59492ccafe15df5" containerID="9250496c585327c26795a4ba925297eda6aefab50f23daee888a6e7c19b4af75" exitCode=0 Feb 24 02:27:13.002383 master-0 kubenswrapper[31411]: I0224 02:27:13.002380 31411 generic.go:334] "Generic (PLEG): container finished" podID="754ca2ae56da4950b59492ccafe15df5" containerID="2dd35a4b899d0d46f8cae0b0fdc77f4727b3df36f32d794fdf4c0908a1c24f54" exitCode=0 Feb 24 02:27:13.002544 master-0 kubenswrapper[31411]: I0224 02:27:13.002397 31411 generic.go:334] "Generic (PLEG): container finished" podID="754ca2ae56da4950b59492ccafe15df5" containerID="7221134b3af5b91666a2752516338d10accc510917d963f481c2b324d0d85d72" exitCode=2 Feb 24 02:27:13.002544 master-0 kubenswrapper[31411]: I0224 02:27:13.002411 31411 generic.go:334] "Generic (PLEG): container finished" 
podID="754ca2ae56da4950b59492ccafe15df5" containerID="aafef463fb6ec586aa93d68463140f0e495d6fd2628a807d6f4cc72093d982ad" exitCode=0 Feb 24 02:27:13.002544 master-0 kubenswrapper[31411]: I0224 02:27:13.002523 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c7a4ddfc580299e516c3b8a5ba70fdc237d2a8d221bdcc8c2e2dd933bbe11dc" Feb 24 02:27:13.002782 master-0 kubenswrapper[31411]: I0224 02:27:13.002565 31411 scope.go:117] "RemoveContainer" containerID="e8427f995dbba6ba67b5688619b31dac495d913cbeaa8eabfe11085df7d0a498" Feb 24 02:27:13.002998 master-0 kubenswrapper[31411]: I0224 02:27:13.002959 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:13.004857 master-0 kubenswrapper[31411]: I0224 02:27:13.004816 31411 generic.go:334] "Generic (PLEG): container finished" podID="9ce98a71-76d2-4dca-a329-12edcd926f2e" containerID="1af12da9edd7fbe2e8a879b98aebbdf16570d94c6a0c0cf1965ea814db1b7c46" exitCode=0 Feb 24 02:27:13.005035 master-0 kubenswrapper[31411]: I0224 02:27:13.004941 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"9ce98a71-76d2-4dca-a329-12edcd926f2e","Type":"ContainerDied","Data":"1af12da9edd7fbe2e8a879b98aebbdf16570d94c6a0c0cf1965ea814db1b7c46"} Feb 24 02:27:13.008365 master-0 kubenswrapper[31411]: I0224 02:27:13.008300 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="754ca2ae56da4950b59492ccafe15df5" podUID="90cab0b4b999dea8aa2983c3ba90887c" Feb 24 02:27:13.013688 master-0 kubenswrapper[31411]: I0224 02:27:13.013621 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-cert-dir\") pod 
\"754ca2ae56da4950b59492ccafe15df5\" (UID: \"754ca2ae56da4950b59492ccafe15df5\") " Feb 24 02:27:13.013688 master-0 kubenswrapper[31411]: I0224 02:27:13.013682 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-resource-dir\") pod \"754ca2ae56da4950b59492ccafe15df5\" (UID: \"754ca2ae56da4950b59492ccafe15df5\") " Feb 24 02:27:13.014007 master-0 kubenswrapper[31411]: I0224 02:27:13.013973 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "754ca2ae56da4950b59492ccafe15df5" (UID: "754ca2ae56da4950b59492ccafe15df5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:27:13.014159 master-0 kubenswrapper[31411]: I0224 02:27:13.014107 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "754ca2ae56da4950b59492ccafe15df5" (UID: "754ca2ae56da4950b59492ccafe15df5"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:27:13.014614 master-0 kubenswrapper[31411]: I0224 02:27:13.014560 31411 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:13.014811 master-0 kubenswrapper[31411]: I0224 02:27:13.014787 31411 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/754ca2ae56da4950b59492ccafe15df5-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:13.106156 master-0 kubenswrapper[31411]: I0224 02:27:13.105970 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754ca2ae56da4950b59492ccafe15df5" path="/var/lib/kubelet/pods/754ca2ae56da4950b59492ccafe15df5/volumes" Feb 24 02:27:13.314143 master-0 kubenswrapper[31411]: I0224 02:27:13.314044 31411 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="754ca2ae56da4950b59492ccafe15df5" podUID="90cab0b4b999dea8aa2983c3ba90887c" Feb 24 02:27:14.022209 master-0 kubenswrapper[31411]: I0224 02:27:14.022127 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_754ca2ae56da4950b59492ccafe15df5/kube-controller-manager-cert-syncer/0.log" Feb 24 02:27:14.610080 master-0 kubenswrapper[31411]: I0224 02:27:14.610001 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 02:27:14.751432 master-0 kubenswrapper[31411]: I0224 02:27:14.751342 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-var-lock\") pod \"9ce98a71-76d2-4dca-a329-12edcd926f2e\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " Feb 24 02:27:14.751761 master-0 kubenswrapper[31411]: I0224 02:27:14.751459 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ce98a71-76d2-4dca-a329-12edcd926f2e-kube-api-access\") pod \"9ce98a71-76d2-4dca-a329-12edcd926f2e\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " Feb 24 02:27:14.751761 master-0 kubenswrapper[31411]: I0224 02:27:14.751512 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-var-lock" (OuterVolumeSpecName: "var-lock") pod "9ce98a71-76d2-4dca-a329-12edcd926f2e" (UID: "9ce98a71-76d2-4dca-a329-12edcd926f2e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:27:14.751761 master-0 kubenswrapper[31411]: I0224 02:27:14.751537 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-kubelet-dir\") pod \"9ce98a71-76d2-4dca-a329-12edcd926f2e\" (UID: \"9ce98a71-76d2-4dca-a329-12edcd926f2e\") " Feb 24 02:27:14.751761 master-0 kubenswrapper[31411]: I0224 02:27:14.751681 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9ce98a71-76d2-4dca-a329-12edcd926f2e" (UID: "9ce98a71-76d2-4dca-a329-12edcd926f2e"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:27:14.756409 master-0 kubenswrapper[31411]: I0224 02:27:14.755657 31411 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:14.756409 master-0 kubenswrapper[31411]: I0224 02:27:14.755703 31411 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ce98a71-76d2-4dca-a329-12edcd926f2e-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:14.761961 master-0 kubenswrapper[31411]: I0224 02:27:14.760897 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce98a71-76d2-4dca-a329-12edcd926f2e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9ce98a71-76d2-4dca-a329-12edcd926f2e" (UID: "9ce98a71-76d2-4dca-a329-12edcd926f2e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:27:14.857360 master-0 kubenswrapper[31411]: I0224 02:27:14.857278 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ce98a71-76d2-4dca-a329-12edcd926f2e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:15.036593 master-0 kubenswrapper[31411]: I0224 02:27:15.036510 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"9ce98a71-76d2-4dca-a329-12edcd926f2e","Type":"ContainerDied","Data":"5160537e55cfce859a2c6dac2756b1d812115c4f27b28d2771512903b8926c06"} Feb 24 02:27:15.037362 master-0 kubenswrapper[31411]: I0224 02:27:15.036622 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5160537e55cfce859a2c6dac2756b1d812115c4f27b28d2771512903b8926c06" Feb 24 02:27:15.037362 master-0 kubenswrapper[31411]: I0224 02:27:15.036690 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 02:27:17.003065 master-0 kubenswrapper[31411]: I0224 02:27:17.002883 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-79b5f69b87-9qbb4" podUID="db8e2a95-f0dd-4199-8ba1-51aac840146e" containerName="console" containerID="cri-o://a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d" gracePeriod=15 Feb 24 02:27:17.745746 master-0 kubenswrapper[31411]: I0224 02:27:17.745696 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79b5f69b87-9qbb4_db8e2a95-f0dd-4199-8ba1-51aac840146e/console/0.log" Feb 24 02:27:17.745933 master-0 kubenswrapper[31411]: I0224 02:27:17.745827 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:27:17.824564 master-0 kubenswrapper[31411]: I0224 02:27:17.824478 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-service-ca\") pod \"db8e2a95-f0dd-4199-8ba1-51aac840146e\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " Feb 24 02:27:17.824739 master-0 kubenswrapper[31411]: I0224 02:27:17.824623 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-oauth-serving-cert\") pod \"db8e2a95-f0dd-4199-8ba1-51aac840146e\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " Feb 24 02:27:17.824739 master-0 kubenswrapper[31411]: I0224 02:27:17.824693 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-oauth-config\") pod \"db8e2a95-f0dd-4199-8ba1-51aac840146e\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " Feb 24 02:27:17.824871 master-0 kubenswrapper[31411]: I0224 02:27:17.824777 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-serving-cert\") pod \"db8e2a95-f0dd-4199-8ba1-51aac840146e\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " Feb 24 02:27:17.824871 master-0 kubenswrapper[31411]: I0224 02:27:17.824839 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpgnf\" (UniqueName: \"kubernetes.io/projected/db8e2a95-f0dd-4199-8ba1-51aac840146e-kube-api-access-vpgnf\") pod \"db8e2a95-f0dd-4199-8ba1-51aac840146e\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " Feb 24 02:27:17.825021 master-0 kubenswrapper[31411]: 
I0224 02:27:17.824880 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-trusted-ca-bundle\") pod \"db8e2a95-f0dd-4199-8ba1-51aac840146e\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " Feb 24 02:27:17.825021 master-0 kubenswrapper[31411]: I0224 02:27:17.824913 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-config\") pod \"db8e2a95-f0dd-4199-8ba1-51aac840146e\" (UID: \"db8e2a95-f0dd-4199-8ba1-51aac840146e\") " Feb 24 02:27:17.825426 master-0 kubenswrapper[31411]: I0224 02:27:17.825345 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-service-ca" (OuterVolumeSpecName: "service-ca") pod "db8e2a95-f0dd-4199-8ba1-51aac840146e" (UID: "db8e2a95-f0dd-4199-8ba1-51aac840146e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:17.826352 master-0 kubenswrapper[31411]: I0224 02:27:17.826275 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-config" (OuterVolumeSpecName: "console-config") pod "db8e2a95-f0dd-4199-8ba1-51aac840146e" (UID: "db8e2a95-f0dd-4199-8ba1-51aac840146e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:17.826352 master-0 kubenswrapper[31411]: I0224 02:27:17.826247 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "db8e2a95-f0dd-4199-8ba1-51aac840146e" (UID: "db8e2a95-f0dd-4199-8ba1-51aac840146e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:17.827025 master-0 kubenswrapper[31411]: I0224 02:27:17.826973 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "db8e2a95-f0dd-4199-8ba1-51aac840146e" (UID: "db8e2a95-f0dd-4199-8ba1-51aac840146e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:17.830681 master-0 kubenswrapper[31411]: I0224 02:27:17.830603 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "db8e2a95-f0dd-4199-8ba1-51aac840146e" (UID: "db8e2a95-f0dd-4199-8ba1-51aac840146e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:27:17.830805 master-0 kubenswrapper[31411]: I0224 02:27:17.830667 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "db8e2a95-f0dd-4199-8ba1-51aac840146e" (UID: "db8e2a95-f0dd-4199-8ba1-51aac840146e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:27:17.830878 master-0 kubenswrapper[31411]: I0224 02:27:17.830825 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db8e2a95-f0dd-4199-8ba1-51aac840146e-kube-api-access-vpgnf" (OuterVolumeSpecName: "kube-api-access-vpgnf") pod "db8e2a95-f0dd-4199-8ba1-51aac840146e" (UID: "db8e2a95-f0dd-4199-8ba1-51aac840146e"). InnerVolumeSpecName "kube-api-access-vpgnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:27:17.929092 master-0 kubenswrapper[31411]: I0224 02:27:17.928919 31411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:17.929092 master-0 kubenswrapper[31411]: I0224 02:27:17.928967 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpgnf\" (UniqueName: \"kubernetes.io/projected/db8e2a95-f0dd-4199-8ba1-51aac840146e-kube-api-access-vpgnf\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:17.929092 master-0 kubenswrapper[31411]: I0224 02:27:17.928983 31411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:17.929092 master-0 kubenswrapper[31411]: I0224 02:27:17.928996 31411 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:17.929092 master-0 kubenswrapper[31411]: I0224 02:27:17.929009 31411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:17.929092 master-0 kubenswrapper[31411]: I0224 02:27:17.929021 31411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db8e2a95-f0dd-4199-8ba1-51aac840146e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:17.929092 master-0 kubenswrapper[31411]: I0224 02:27:17.929033 31411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/db8e2a95-f0dd-4199-8ba1-51aac840146e-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:18.068036 master-0 kubenswrapper[31411]: I0224 02:27:18.067927 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79b5f69b87-9qbb4_db8e2a95-f0dd-4199-8ba1-51aac840146e/console/0.log" Feb 24 02:27:18.069213 master-0 kubenswrapper[31411]: I0224 02:27:18.068049 31411 generic.go:334] "Generic (PLEG): container finished" podID="db8e2a95-f0dd-4199-8ba1-51aac840146e" containerID="a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d" exitCode=2 Feb 24 02:27:18.069213 master-0 kubenswrapper[31411]: I0224 02:27:18.068103 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79b5f69b87-9qbb4" event={"ID":"db8e2a95-f0dd-4199-8ba1-51aac840146e","Type":"ContainerDied","Data":"a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d"} Feb 24 02:27:18.069213 master-0 kubenswrapper[31411]: I0224 02:27:18.068149 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79b5f69b87-9qbb4" Feb 24 02:27:18.069213 master-0 kubenswrapper[31411]: I0224 02:27:18.068203 31411 scope.go:117] "RemoveContainer" containerID="a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d" Feb 24 02:27:18.069213 master-0 kubenswrapper[31411]: I0224 02:27:18.068159 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79b5f69b87-9qbb4" event={"ID":"db8e2a95-f0dd-4199-8ba1-51aac840146e","Type":"ContainerDied","Data":"03d97ad09bba8d15baea6c8f331911b8d1bcb7190a8cb9793c7f8fa5ce314a57"} Feb 24 02:27:18.104238 master-0 kubenswrapper[31411]: I0224 02:27:18.104051 31411 scope.go:117] "RemoveContainer" containerID="a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d" Feb 24 02:27:18.104983 master-0 kubenswrapper[31411]: E0224 02:27:18.104528 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d\": container with ID starting with a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d not found: ID does not exist" containerID="a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d" Feb 24 02:27:18.105098 master-0 kubenswrapper[31411]: I0224 02:27:18.104985 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d"} err="failed to get container status \"a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d\": rpc error: code = NotFound desc = could not find container \"a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d\": container with ID starting with a7d82d2784f15a68cc4e32a983450691b1ab3e6e22537f9275c6afe158055b9d not found: ID does not exist" Feb 24 02:27:18.121188 master-0 kubenswrapper[31411]: I0224 02:27:18.121133 31411 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-console/console-79b5f69b87-9qbb4"] Feb 24 02:27:18.140651 master-0 kubenswrapper[31411]: I0224 02:27:18.140530 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-79b5f69b87-9qbb4"] Feb 24 02:27:19.111387 master-0 kubenswrapper[31411]: I0224 02:27:19.111315 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db8e2a95-f0dd-4199-8ba1-51aac840146e" path="/var/lib/kubelet/pods/db8e2a95-f0dd-4199-8ba1-51aac840146e/volumes" Feb 24 02:27:23.091738 master-0 kubenswrapper[31411]: I0224 02:27:23.091652 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:23.118103 master-0 kubenswrapper[31411]: I0224 02:27:23.118038 31411 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="db9d33e7-6162-46bc-8147-0da9cd8cc402" Feb 24 02:27:23.118103 master-0 kubenswrapper[31411]: I0224 02:27:23.118095 31411 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="db9d33e7-6162-46bc-8147-0da9cd8cc402" Feb 24 02:27:23.143473 master-0 kubenswrapper[31411]: I0224 02:27:23.142548 31411 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:23.148694 master-0 kubenswrapper[31411]: I0224 02:27:23.147850 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:27:23.159422 master-0 kubenswrapper[31411]: I0224 02:27:23.159056 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:27:23.169459 master-0 kubenswrapper[31411]: I0224 02:27:23.169372 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:23.176873 master-0 kubenswrapper[31411]: I0224 02:27:23.176816 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 02:27:23.218474 master-0 kubenswrapper[31411]: W0224 02:27:23.218400 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90cab0b4b999dea8aa2983c3ba90887c.slice/crio-b2f3793cad05771f6dcc3969c791a4b0bcab3405c312e2c7a9af9506a8e2a1ee WatchSource:0}: Error finding container b2f3793cad05771f6dcc3969c791a4b0bcab3405c312e2c7a9af9506a8e2a1ee: Status 404 returned error can't find the container with id b2f3793cad05771f6dcc3969c791a4b0bcab3405c312e2c7a9af9506a8e2a1ee Feb 24 02:27:24.153773 master-0 kubenswrapper[31411]: I0224 02:27:24.153540 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"90cab0b4b999dea8aa2983c3ba90887c","Type":"ContainerStarted","Data":"2558dcd86b8811023e67ce075499f31d45a0f32c90e8695da0ed8ab1725df0fe"} Feb 24 02:27:24.153773 master-0 kubenswrapper[31411]: I0224 02:27:24.153661 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"90cab0b4b999dea8aa2983c3ba90887c","Type":"ContainerStarted","Data":"7c4316790f4be799c8aab44b8a2db1543b1550bafae701c105b61b07adb9c226"} Feb 24 02:27:24.153773 master-0 kubenswrapper[31411]: I0224 02:27:24.153677 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"90cab0b4b999dea8aa2983c3ba90887c","Type":"ContainerStarted","Data":"b2f3793cad05771f6dcc3969c791a4b0bcab3405c312e2c7a9af9506a8e2a1ee"} Feb 24 02:27:25.175011 master-0 kubenswrapper[31411]: I0224 02:27:25.174912 31411 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"90cab0b4b999dea8aa2983c3ba90887c","Type":"ContainerStarted","Data":"5c6855703f0ef0478455058f4a79740efbc905126dc4620181cf390c774dbd13"} Feb 24 02:27:25.175011 master-0 kubenswrapper[31411]: I0224 02:27:25.174996 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"90cab0b4b999dea8aa2983c3ba90887c","Type":"ContainerStarted","Data":"31b55a9ee19b2906747ca02044b530d80c92f4ba55156ba1dfdef23b1589a5af"} Feb 24 02:27:25.215257 master-0 kubenswrapper[31411]: I0224 02:27:25.215061 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.215030245 podStartE2EDuration="2.215030245s" podCreationTimestamp="2026-02-24 02:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:27:25.212936446 +0000 UTC m=+388.430134322" watchObservedRunningTime="2026-02-24 02:27:25.215030245 +0000 UTC m=+388.432228131" Feb 24 02:27:33.171332 master-0 kubenswrapper[31411]: I0224 02:27:33.171250 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:33.172332 master-0 kubenswrapper[31411]: I0224 02:27:33.171479 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:33.172332 master-0 kubenswrapper[31411]: I0224 02:27:33.171512 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:33.172332 master-0 kubenswrapper[31411]: I0224 02:27:33.171536 31411 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:33.172332 master-0 kubenswrapper[31411]: I0224 02:27:33.172169 31411 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 24 02:27:33.172332 master-0 kubenswrapper[31411]: I0224 02:27:33.172307 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="90cab0b4b999dea8aa2983c3ba90887c" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 24 02:27:33.178922 master-0 kubenswrapper[31411]: I0224 02:27:33.178852 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:33.270853 master-0 kubenswrapper[31411]: I0224 02:27:33.270762 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:43.179602 master-0 kubenswrapper[31411]: I0224 02:27:43.179520 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:43.188351 master-0 kubenswrapper[31411]: I0224 02:27:43.188269 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 02:27:57.563904 master-0 kubenswrapper[31411]: I0224 02:27:57.563824 31411 scope.go:117] "RemoveContainer" 
containerID="7221134b3af5b91666a2752516338d10accc510917d963f481c2b324d0d85d72" Feb 24 02:27:57.585673 master-0 kubenswrapper[31411]: I0224 02:27:57.585569 31411 scope.go:117] "RemoveContainer" containerID="aafef463fb6ec586aa93d68463140f0e495d6fd2628a807d6f4cc72093d982ad" Feb 24 02:27:57.608178 master-0 kubenswrapper[31411]: I0224 02:27:57.608131 31411 scope.go:117] "RemoveContainer" containerID="2dd35a4b899d0d46f8cae0b0fdc77f4727b3df36f32d794fdf4c0908a1c24f54" Feb 24 02:27:58.276567 master-0 kubenswrapper[31411]: I0224 02:27:58.276209 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-q2bh9"] Feb 24 02:27:58.276567 master-0 kubenswrapper[31411]: E0224 02:27:58.276505 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce98a71-76d2-4dca-a329-12edcd926f2e" containerName="installer" Feb 24 02:27:58.276567 master-0 kubenswrapper[31411]: I0224 02:27:58.276516 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce98a71-76d2-4dca-a329-12edcd926f2e" containerName="installer" Feb 24 02:27:58.276567 master-0 kubenswrapper[31411]: E0224 02:27:58.276535 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db8e2a95-f0dd-4199-8ba1-51aac840146e" containerName="console" Feb 24 02:27:58.276567 master-0 kubenswrapper[31411]: I0224 02:27:58.276541 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="db8e2a95-f0dd-4199-8ba1-51aac840146e" containerName="console" Feb 24 02:27:58.277191 master-0 kubenswrapper[31411]: I0224 02:27:58.276720 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce98a71-76d2-4dca-a329-12edcd926f2e" containerName="installer" Feb 24 02:27:58.277191 master-0 kubenswrapper[31411]: I0224 02:27:58.276734 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="db8e2a95-f0dd-4199-8ba1-51aac840146e" containerName="console" Feb 24 02:27:58.277319 master-0 kubenswrapper[31411]: I0224 02:27:58.277195 31411 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.283614 master-0 kubenswrapper[31411]: I0224 02:27:58.280526 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Feb 24 02:27:58.283614 master-0 kubenswrapper[31411]: I0224 02:27:58.280766 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Feb 24 02:27:58.283614 master-0 kubenswrapper[31411]: I0224 02:27:58.280880 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Feb 24 02:27:58.283614 master-0 kubenswrapper[31411]: I0224 02:27:58.281001 31411 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Feb 24 02:27:58.308606 master-0 kubenswrapper[31411]: I0224 02:27:58.307826 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-q2bh9"] Feb 24 02:27:58.390915 master-0 kubenswrapper[31411]: I0224 02:27:58.390859 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-594mx\" (UniqueName: \"kubernetes.io/projected/432763a0-0405-497d-b4a2-d253c31a5d3e-kube-api-access-594mx\") pod \"sushy-emulator-78f6d7d749-q2bh9\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.391172 master-0 kubenswrapper[31411]: I0224 02:27:58.390998 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/432763a0-0405-497d-b4a2-d253c31a5d3e-os-client-config\") pod \"sushy-emulator-78f6d7d749-q2bh9\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.391172 master-0 kubenswrapper[31411]: I0224 02:27:58.391064 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/432763a0-0405-497d-b4a2-d253c31a5d3e-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-q2bh9\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.492743 master-0 kubenswrapper[31411]: I0224 02:27:58.492676 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/432763a0-0405-497d-b4a2-d253c31a5d3e-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-q2bh9\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.493001 master-0 kubenswrapper[31411]: I0224 02:27:58.492779 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-594mx\" (UniqueName: \"kubernetes.io/projected/432763a0-0405-497d-b4a2-d253c31a5d3e-kube-api-access-594mx\") pod \"sushy-emulator-78f6d7d749-q2bh9\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.493001 master-0 kubenswrapper[31411]: I0224 02:27:58.492863 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/432763a0-0405-497d-b4a2-d253c31a5d3e-os-client-config\") pod \"sushy-emulator-78f6d7d749-q2bh9\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.494091 master-0 kubenswrapper[31411]: I0224 02:27:58.494028 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/432763a0-0405-497d-b4a2-d253c31a5d3e-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-q2bh9\" (UID: 
\"432763a0-0405-497d-b4a2-d253c31a5d3e\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.498329 master-0 kubenswrapper[31411]: I0224 02:27:58.498275 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/432763a0-0405-497d-b4a2-d253c31a5d3e-os-client-config\") pod \"sushy-emulator-78f6d7d749-q2bh9\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.516821 master-0 kubenswrapper[31411]: I0224 02:27:58.516754 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-594mx\" (UniqueName: \"kubernetes.io/projected/432763a0-0405-497d-b4a2-d253c31a5d3e-kube-api-access-594mx\") pod \"sushy-emulator-78f6d7d749-q2bh9\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:58.650908 master-0 kubenswrapper[31411]: I0224 02:27:58.650748 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:27:59.179239 master-0 kubenswrapper[31411]: I0224 02:27:59.179167 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-q2bh9"] Feb 24 02:27:59.195877 master-0 kubenswrapper[31411]: W0224 02:27:59.195818 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod432763a0_0405_497d_b4a2_d253c31a5d3e.slice/crio-ee56223b1fb091dd4b599b32727ba11df4ca2828b744f13aff27734cceeb6f8c WatchSource:0}: Error finding container ee56223b1fb091dd4b599b32727ba11df4ca2828b744f13aff27734cceeb6f8c: Status 404 returned error can't find the container with id ee56223b1fb091dd4b599b32727ba11df4ca2828b744f13aff27734cceeb6f8c Feb 24 02:27:59.199501 master-0 kubenswrapper[31411]: I0224 02:27:59.199457 31411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 02:27:59.536918 master-0 kubenswrapper[31411]: I0224 02:27:59.533534 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" event={"ID":"432763a0-0405-497d-b4a2-d253c31a5d3e","Type":"ContainerStarted","Data":"ee56223b1fb091dd4b599b32727ba11df4ca2828b744f13aff27734cceeb6f8c"} Feb 24 02:27:59.536918 master-0 kubenswrapper[31411]: I0224 02:27:59.536047 31411 generic.go:334] "Generic (PLEG): container finished" podID="8c396c41-c617-4631-9700-a7052af5a276" containerID="6e72c582c52cc1175706db2ef4a54c95fdecae69c4b7d4caf28fde6f98e8eaa4" exitCode=0 Feb 24 02:27:59.536918 master-0 kubenswrapper[31411]: I0224 02:27:59.536160 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" event={"ID":"8c396c41-c617-4631-9700-a7052af5a276","Type":"ContainerDied","Data":"6e72c582c52cc1175706db2ef4a54c95fdecae69c4b7d4caf28fde6f98e8eaa4"} Feb 24 02:27:59.641606 master-0 kubenswrapper[31411]: 
I0224 02:27:59.641521 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:27:59.820614 master-0 kubenswrapper[31411]: I0224 02:27:59.820429 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle\") pod \"8c396c41-c617-4631-9700-a7052af5a276\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " Feb 24 02:27:59.821787 master-0 kubenswrapper[31411]: I0224 02:27:59.821751 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4grf\" (UniqueName: \"kubernetes.io/projected/8c396c41-c617-4631-9700-a7052af5a276-kube-api-access-n4grf\") pod \"8c396c41-c617-4631-9700-a7052af5a276\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " Feb 24 02:27:59.822049 master-0 kubenswrapper[31411]: I0224 02:27:59.821962 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "8c396c41-c617-4631-9700-a7052af5a276" (UID: "8c396c41-c617-4631-9700-a7052af5a276"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:59.822150 master-0 kubenswrapper[31411]: I0224 02:27:59.821984 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles\") pod \"8c396c41-c617-4631-9700-a7052af5a276\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " Feb 24 02:27:59.822150 master-0 kubenswrapper[31411]: I0224 02:27:59.822141 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs\") pod \"8c396c41-c617-4631-9700-a7052af5a276\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " Feb 24 02:27:59.822292 master-0 kubenswrapper[31411]: I0224 02:27:59.822207 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls\") pod \"8c396c41-c617-4631-9700-a7052af5a276\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " Feb 24 02:27:59.822501 master-0 kubenswrapper[31411]: I0224 02:27:59.822464 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle\") pod \"8c396c41-c617-4631-9700-a7052af5a276\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " Feb 24 02:27:59.822673 master-0 kubenswrapper[31411]: I0224 02:27:59.822569 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c396c41-c617-4631-9700-a7052af5a276-audit-log\") pod \"8c396c41-c617-4631-9700-a7052af5a276\" (UID: \"8c396c41-c617-4631-9700-a7052af5a276\") " Feb 24 02:27:59.824375 master-0 
kubenswrapper[31411]: I0224 02:27:59.824003 31411 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:59.824375 master-0 kubenswrapper[31411]: I0224 02:27:59.824327 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c396c41-c617-4631-9700-a7052af5a276-audit-log" (OuterVolumeSpecName: "audit-log") pod "8c396c41-c617-4631-9700-a7052af5a276" (UID: "8c396c41-c617-4631-9700-a7052af5a276"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:27:59.825152 master-0 kubenswrapper[31411]: I0224 02:27:59.825114 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "8c396c41-c617-4631-9700-a7052af5a276" (UID: "8c396c41-c617-4631-9700-a7052af5a276"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:27:59.829906 master-0 kubenswrapper[31411]: I0224 02:27:59.829845 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "8c396c41-c617-4631-9700-a7052af5a276" (UID: "8c396c41-c617-4631-9700-a7052af5a276"). InnerVolumeSpecName "secret-metrics-client-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:27:59.830162 master-0 kubenswrapper[31411]: I0224 02:27:59.830114 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "8c396c41-c617-4631-9700-a7052af5a276" (UID: "8c396c41-c617-4631-9700-a7052af5a276"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:27:59.833740 master-0 kubenswrapper[31411]: I0224 02:27:59.832122 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c396c41-c617-4631-9700-a7052af5a276-kube-api-access-n4grf" (OuterVolumeSpecName: "kube-api-access-n4grf") pod "8c396c41-c617-4631-9700-a7052af5a276" (UID: "8c396c41-c617-4631-9700-a7052af5a276"). InnerVolumeSpecName "kube-api-access-n4grf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:27:59.836908 master-0 kubenswrapper[31411]: I0224 02:27:59.836814 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "8c396c41-c617-4631-9700-a7052af5a276" (UID: "8c396c41-c617-4631-9700-a7052af5a276"). InnerVolumeSpecName "client-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:27:59.927980 master-0 kubenswrapper[31411]: I0224 02:27:59.927768 31411 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:59.927980 master-0 kubenswrapper[31411]: I0224 02:27:59.927826 31411 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8c396c41-c617-4631-9700-a7052af5a276-audit-log\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:59.927980 master-0 kubenswrapper[31411]: I0224 02:27:59.927852 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4grf\" (UniqueName: \"kubernetes.io/projected/8c396c41-c617-4631-9700-a7052af5a276-kube-api-access-n4grf\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:59.927980 master-0 kubenswrapper[31411]: I0224 02:27:59.927895 31411 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8c396c41-c617-4631-9700-a7052af5a276-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:59.927980 master-0 kubenswrapper[31411]: I0224 02:27:59.927918 31411 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:27:59.927980 master-0 kubenswrapper[31411]: I0224 02:27:59.927937 31411 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8c396c41-c617-4631-9700-a7052af5a276-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Feb 24 02:28:00.546810 master-0 kubenswrapper[31411]: I0224 02:28:00.546731 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" event={"ID":"8c396c41-c617-4631-9700-a7052af5a276","Type":"ContainerDied","Data":"1684de33a1b99214450be6d5f8c060f5a9f1a7c517e642d15f1d667f8a119c75"} Feb 24 02:28:00.546810 master-0 kubenswrapper[31411]: I0224 02:28:00.546817 31411 scope.go:117] "RemoveContainer" containerID="6e72c582c52cc1175706db2ef4a54c95fdecae69c4b7d4caf28fde6f98e8eaa4" Feb 24 02:28:00.546810 master-0 kubenswrapper[31411]: I0224 02:28:00.546820 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7b9cc5984b-smpdl" Feb 24 02:28:00.583856 master-0 kubenswrapper[31411]: I0224 02:28:00.583677 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-7b9cc5984b-smpdl"] Feb 24 02:28:00.589787 master-0 kubenswrapper[31411]: I0224 02:28:00.589663 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-7b9cc5984b-smpdl"] Feb 24 02:28:01.108086 master-0 kubenswrapper[31411]: I0224 02:28:01.108001 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c396c41-c617-4631-9700-a7052af5a276" path="/var/lib/kubelet/pods/8c396c41-c617-4631-9700-a7052af5a276/volumes" Feb 24 02:28:06.623301 master-0 kubenswrapper[31411]: I0224 02:28:06.623149 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" event={"ID":"432763a0-0405-497d-b4a2-d253c31a5d3e","Type":"ContainerStarted","Data":"9554571b8fb1928f8dad689398cbb0a68508918fa53aff32e7ab5771cdb98c43"} Feb 24 02:28:06.660759 master-0 kubenswrapper[31411]: I0224 02:28:06.660640 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" podStartSLOduration=1.823019062 podStartE2EDuration="8.660613441s" podCreationTimestamp="2026-02-24 02:27:58 +0000 UTC" firstStartedPulling="2026-02-24 02:27:59.199352568 +0000 UTC 
m=+422.416550424" lastFinishedPulling="2026-02-24 02:28:06.036946927 +0000 UTC m=+429.254144803" observedRunningTime="2026-02-24 02:28:06.653888051 +0000 UTC m=+429.871085947" watchObservedRunningTime="2026-02-24 02:28:06.660613441 +0000 UTC m=+429.877811327" Feb 24 02:28:08.651161 master-0 kubenswrapper[31411]: I0224 02:28:08.651055 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:28:08.653361 master-0 kubenswrapper[31411]: I0224 02:28:08.651355 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:28:08.667848 master-0 kubenswrapper[31411]: I0224 02:28:08.667788 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:28:09.653022 master-0 kubenswrapper[31411]: I0224 02:28:09.652973 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" Feb 24 02:28:28.969662 master-0 kubenswrapper[31411]: I0224 02:28:28.969428 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc"] Feb 24 02:28:28.970896 master-0 kubenswrapper[31411]: E0224 02:28:28.969971 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c396c41-c617-4631-9700-a7052af5a276" containerName="metrics-server" Feb 24 02:28:28.970896 master-0 kubenswrapper[31411]: I0224 02:28:28.969994 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c396c41-c617-4631-9700-a7052af5a276" containerName="metrics-server" Feb 24 02:28:28.970896 master-0 kubenswrapper[31411]: I0224 02:28:28.970236 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c396c41-c617-4631-9700-a7052af5a276" containerName="metrics-server" Feb 24 02:28:28.972164 master-0 kubenswrapper[31411]: I0224 02:28:28.972117 31411 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" Feb 24 02:28:28.990661 master-0 kubenswrapper[31411]: I0224 02:28:28.990457 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc"] Feb 24 02:28:29.079072 master-0 kubenswrapper[31411]: I0224 02:28:29.078990 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/cd9ff58f-4974-4ec2-97ca-4b88c830aa2c-os-client-config\") pod \"nova-console-poller-67cbf9ddc7-sbfjc\" (UID: \"cd9ff58f-4974-4ec2-97ca-4b88c830aa2c\") " pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" Feb 24 02:28:29.079330 master-0 kubenswrapper[31411]: I0224 02:28:29.079260 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp9lq\" (UniqueName: \"kubernetes.io/projected/cd9ff58f-4974-4ec2-97ca-4b88c830aa2c-kube-api-access-qp9lq\") pod \"nova-console-poller-67cbf9ddc7-sbfjc\" (UID: \"cd9ff58f-4974-4ec2-97ca-4b88c830aa2c\") " pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" Feb 24 02:28:29.181118 master-0 kubenswrapper[31411]: I0224 02:28:29.181019 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp9lq\" (UniqueName: \"kubernetes.io/projected/cd9ff58f-4974-4ec2-97ca-4b88c830aa2c-kube-api-access-qp9lq\") pod \"nova-console-poller-67cbf9ddc7-sbfjc\" (UID: \"cd9ff58f-4974-4ec2-97ca-4b88c830aa2c\") " pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" Feb 24 02:28:29.181601 master-0 kubenswrapper[31411]: I0224 02:28:29.181509 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/cd9ff58f-4974-4ec2-97ca-4b88c830aa2c-os-client-config\") pod \"nova-console-poller-67cbf9ddc7-sbfjc\" (UID: 
\"cd9ff58f-4974-4ec2-97ca-4b88c830aa2c\") " pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" Feb 24 02:28:29.187673 master-0 kubenswrapper[31411]: I0224 02:28:29.187611 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/cd9ff58f-4974-4ec2-97ca-4b88c830aa2c-os-client-config\") pod \"nova-console-poller-67cbf9ddc7-sbfjc\" (UID: \"cd9ff58f-4974-4ec2-97ca-4b88c830aa2c\") " pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" Feb 24 02:28:29.214345 master-0 kubenswrapper[31411]: I0224 02:28:29.214245 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp9lq\" (UniqueName: \"kubernetes.io/projected/cd9ff58f-4974-4ec2-97ca-4b88c830aa2c-kube-api-access-qp9lq\") pod \"nova-console-poller-67cbf9ddc7-sbfjc\" (UID: \"cd9ff58f-4974-4ec2-97ca-4b88c830aa2c\") " pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" Feb 24 02:28:29.315258 master-0 kubenswrapper[31411]: I0224 02:28:29.315074 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" Feb 24 02:28:29.853661 master-0 kubenswrapper[31411]: I0224 02:28:29.849273 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc"] Feb 24 02:28:29.853661 master-0 kubenswrapper[31411]: W0224 02:28:29.852883 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd9ff58f_4974_4ec2_97ca_4b88c830aa2c.slice/crio-61d59a49050286c9518da619a73a36f44c925747f3d0226bf32798954717c21f WatchSource:0}: Error finding container 61d59a49050286c9518da619a73a36f44c925747f3d0226bf32798954717c21f: Status 404 returned error can't find the container with id 61d59a49050286c9518da619a73a36f44c925747f3d0226bf32798954717c21f Feb 24 02:28:30.857349 master-0 kubenswrapper[31411]: I0224 02:28:30.857254 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" event={"ID":"cd9ff58f-4974-4ec2-97ca-4b88c830aa2c","Type":"ContainerStarted","Data":"61d59a49050286c9518da619a73a36f44c925747f3d0226bf32798954717c21f"} Feb 24 02:28:35.948159 master-0 kubenswrapper[31411]: I0224 02:28:35.947998 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" event={"ID":"cd9ff58f-4974-4ec2-97ca-4b88c830aa2c","Type":"ContainerStarted","Data":"cae172e845ce9a93ce99d52bc5e20efc9b9978215c91323221c75b80234507e4"} Feb 24 02:28:35.948159 master-0 kubenswrapper[31411]: I0224 02:28:35.948096 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" event={"ID":"cd9ff58f-4974-4ec2-97ca-4b88c830aa2c","Type":"ContainerStarted","Data":"552ab61d69606038723227f2a3e2b5203df4392cc0d0f6511657c81ec641f7da"} Feb 24 02:28:35.981249 master-0 kubenswrapper[31411]: I0224 02:28:35.981010 31411 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="sushy-emulator/nova-console-poller-67cbf9ddc7-sbfjc" podStartSLOduration=2.405298502 podStartE2EDuration="7.980972422s" podCreationTimestamp="2026-02-24 02:28:28 +0000 UTC" firstStartedPulling="2026-02-24 02:28:29.855732341 +0000 UTC m=+453.072930227" lastFinishedPulling="2026-02-24 02:28:35.431406261 +0000 UTC m=+458.648604147" observedRunningTime="2026-02-24 02:28:35.974368236 +0000 UTC m=+459.191566112" watchObservedRunningTime="2026-02-24 02:28:35.980972422 +0000 UTC m=+459.198170308" Feb 24 02:28:57.725108 master-0 kubenswrapper[31411]: I0224 02:28:57.725013 31411 scope.go:117] "RemoveContainer" containerID="9250496c585327c26795a4ba925297eda6aefab50f23daee888a6e7c19b4af75" Feb 24 02:29:00.268173 master-0 kubenswrapper[31411]: I0224 02:29:00.267435 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-856878b5df-4lhhs"] Feb 24 02:29:00.269473 master-0 kubenswrapper[31411]: I0224 02:29:00.269421 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:00.294982 master-0 kubenswrapper[31411]: I0224 02:29:00.283852 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-856878b5df-4lhhs"] Feb 24 02:29:00.379751 master-0 kubenswrapper[31411]: I0224 02:29:00.379682 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjcgd\" (UniqueName: \"kubernetes.io/projected/5c07e479-d6ac-45ab-a067-07d988b7b964-kube-api-access-mjcgd\") pod \"nova-console-recorder-856878b5df-4lhhs\" (UID: \"5c07e479-d6ac-45ab-a067-07d988b7b964\") " pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:00.380034 master-0 kubenswrapper[31411]: I0224 02:29:00.379943 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/5c07e479-d6ac-45ab-a067-07d988b7b964-os-client-config\") pod \"nova-console-recorder-856878b5df-4lhhs\" (UID: \"5c07e479-d6ac-45ab-a067-07d988b7b964\") " pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:00.380279 master-0 kubenswrapper[31411]: I0224 02:29:00.380239 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/5c07e479-d6ac-45ab-a067-07d988b7b964-nova-console-recordings-pv\") pod \"nova-console-recorder-856878b5df-4lhhs\" (UID: \"5c07e479-d6ac-45ab-a067-07d988b7b964\") " pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:00.482259 master-0 kubenswrapper[31411]: I0224 02:29:00.482187 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/5c07e479-d6ac-45ab-a067-07d988b7b964-nova-console-recordings-pv\") pod \"nova-console-recorder-856878b5df-4lhhs\" (UID: 
\"5c07e479-d6ac-45ab-a067-07d988b7b964\") " pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:00.482530 master-0 kubenswrapper[31411]: I0224 02:29:00.482315 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjcgd\" (UniqueName: \"kubernetes.io/projected/5c07e479-d6ac-45ab-a067-07d988b7b964-kube-api-access-mjcgd\") pod \"nova-console-recorder-856878b5df-4lhhs\" (UID: \"5c07e479-d6ac-45ab-a067-07d988b7b964\") " pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:00.482530 master-0 kubenswrapper[31411]: I0224 02:29:00.482369 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/5c07e479-d6ac-45ab-a067-07d988b7b964-os-client-config\") pod \"nova-console-recorder-856878b5df-4lhhs\" (UID: \"5c07e479-d6ac-45ab-a067-07d988b7b964\") " pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:00.489760 master-0 kubenswrapper[31411]: I0224 02:29:00.488853 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/5c07e479-d6ac-45ab-a067-07d988b7b964-os-client-config\") pod \"nova-console-recorder-856878b5df-4lhhs\" (UID: \"5c07e479-d6ac-45ab-a067-07d988b7b964\") " pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:00.511498 master-0 kubenswrapper[31411]: I0224 02:29:00.511442 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjcgd\" (UniqueName: \"kubernetes.io/projected/5c07e479-d6ac-45ab-a067-07d988b7b964-kube-api-access-mjcgd\") pod \"nova-console-recorder-856878b5df-4lhhs\" (UID: \"5c07e479-d6ac-45ab-a067-07d988b7b964\") " pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:01.206530 master-0 kubenswrapper[31411]: I0224 02:29:01.206443 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/5c07e479-d6ac-45ab-a067-07d988b7b964-nova-console-recordings-pv\") pod \"nova-console-recorder-856878b5df-4lhhs\" (UID: \"5c07e479-d6ac-45ab-a067-07d988b7b964\") " pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:01.232596 master-0 kubenswrapper[31411]: I0224 02:29:01.232518 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" Feb 24 02:29:01.798257 master-0 kubenswrapper[31411]: W0224 02:29:01.798178 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c07e479_d6ac_45ab_a067_07d988b7b964.slice/crio-3c3ed522b93dfd9b6fe82d5f0b883b3cbf21e031dc098a463c1d09672b8426e3 WatchSource:0}: Error finding container 3c3ed522b93dfd9b6fe82d5f0b883b3cbf21e031dc098a463c1d09672b8426e3: Status 404 returned error can't find the container with id 3c3ed522b93dfd9b6fe82d5f0b883b3cbf21e031dc098a463c1d09672b8426e3 Feb 24 02:29:01.799998 master-0 kubenswrapper[31411]: I0224 02:29:01.799913 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-856878b5df-4lhhs"] Feb 24 02:29:02.213747 master-0 kubenswrapper[31411]: I0224 02:29:02.213606 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" event={"ID":"5c07e479-d6ac-45ab-a067-07d988b7b964","Type":"ContainerStarted","Data":"3c3ed522b93dfd9b6fe82d5f0b883b3cbf21e031dc098a463c1d09672b8426e3"} Feb 24 02:29:09.293378 master-0 kubenswrapper[31411]: I0224 02:29:09.293297 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" event={"ID":"5c07e479-d6ac-45ab-a067-07d988b7b964","Type":"ContainerStarted","Data":"914be6471fc239b9ffeb05e1ee5f6255ef7aacb15ec15996840cac9d24afc2eb"} Feb 24 02:29:10.313729 master-0 kubenswrapper[31411]: I0224 
02:29:10.313632 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" event={"ID":"5c07e479-d6ac-45ab-a067-07d988b7b964","Type":"ContainerStarted","Data":"6e39a756cabadca7a590bc050ed9bff0f8e4b34c73d4a9962e8eeaa15dcb3417"} Feb 24 02:29:10.344669 master-0 kubenswrapper[31411]: I0224 02:29:10.344511 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-856878b5df-4lhhs" podStartSLOduration=2.725018424 podStartE2EDuration="10.344482586s" podCreationTimestamp="2026-02-24 02:29:00 +0000 UTC" firstStartedPulling="2026-02-24 02:29:01.804170817 +0000 UTC m=+485.021368693" lastFinishedPulling="2026-02-24 02:29:09.423634979 +0000 UTC m=+492.640832855" observedRunningTime="2026-02-24 02:29:10.342812469 +0000 UTC m=+493.560010345" watchObservedRunningTime="2026-02-24 02:29:10.344482586 +0000 UTC m=+493.561680462" Feb 24 02:29:36.706046 master-0 kubenswrapper[31411]: I0224 02:29:36.705968 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf"] Feb 24 02:29:36.708676 master-0 kubenswrapper[31411]: I0224 02:29:36.708628 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" Feb 24 02:29:36.711220 master-0 kubenswrapper[31411]: I0224 02:29:36.711178 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-dfkgm" Feb 24 02:29:36.728241 master-0 kubenswrapper[31411]: I0224 02:29:36.728170 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf"] Feb 24 02:29:36.730795 master-0 kubenswrapper[31411]: I0224 02:29:36.730730 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmfmp\" (UniqueName: \"kubernetes.io/projected/30565733-2b81-4928-bfcf-0e1fafb3bdb3-kube-api-access-pmfmp\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" Feb 24 02:29:36.731200 master-0 kubenswrapper[31411]: I0224 02:29:36.731135 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" Feb 24 02:29:36.731446 master-0 kubenswrapper[31411]: I0224 02:29:36.731410 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") " 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" Feb 24 02:29:36.834119 master-0 kubenswrapper[31411]: I0224 02:29:36.834033 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" Feb 24 02:29:36.834416 master-0 kubenswrapper[31411]: I0224 02:29:36.834190 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" Feb 24 02:29:36.834416 master-0 kubenswrapper[31411]: I0224 02:29:36.834299 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmfmp\" (UniqueName: \"kubernetes.io/projected/30565733-2b81-4928-bfcf-0e1fafb3bdb3-kube-api-access-pmfmp\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" Feb 24 02:29:36.835752 master-0 kubenswrapper[31411]: I0224 02:29:36.835700 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" Feb 24 
02:29:36.835837 master-0 kubenswrapper[31411]: I0224 02:29:36.835799 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf"
Feb 24 02:29:36.864066 master-0 kubenswrapper[31411]: I0224 02:29:36.863980 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmfmp\" (UniqueName: \"kubernetes.io/projected/30565733-2b81-4928-bfcf-0e1fafb3bdb3-kube-api-access-pmfmp\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf"
Feb 24 02:29:37.033126 master-0 kubenswrapper[31411]: I0224 02:29:37.032955 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf"
Feb 24 02:29:37.612356 master-0 kubenswrapper[31411]: W0224 02:29:37.612286 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30565733_2b81_4928_bfcf_0e1fafb3bdb3.slice/crio-aaa64f0bba106540b363d309e2fdda29274b5e79fd33316840ac8ced16264be8 WatchSource:0}: Error finding container aaa64f0bba106540b363d309e2fdda29274b5e79fd33316840ac8ced16264be8: Status 404 returned error can't find the container with id aaa64f0bba106540b363d309e2fdda29274b5e79fd33316840ac8ced16264be8
Feb 24 02:29:37.616270 master-0 kubenswrapper[31411]: I0224 02:29:37.616211 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf"]
Feb 24 02:29:38.598648 master-0 kubenswrapper[31411]: I0224 02:29:38.598547 31411 generic.go:334] "Generic (PLEG): container finished" podID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerID="58136bf40aefb136912cd675fcc2f0cd886b55521d85260f39c036d32bbd9966" exitCode=0
Feb 24 02:29:38.598648 master-0 kubenswrapper[31411]: I0224 02:29:38.598641 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" event={"ID":"30565733-2b81-4928-bfcf-0e1fafb3bdb3","Type":"ContainerDied","Data":"58136bf40aefb136912cd675fcc2f0cd886b55521d85260f39c036d32bbd9966"}
Feb 24 02:29:38.599790 master-0 kubenswrapper[31411]: I0224 02:29:38.598683 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" event={"ID":"30565733-2b81-4928-bfcf-0e1fafb3bdb3","Type":"ContainerStarted","Data":"aaa64f0bba106540b363d309e2fdda29274b5e79fd33316840ac8ced16264be8"}
Feb 24 02:29:40.621645 master-0 kubenswrapper[31411]: I0224 02:29:40.621532 31411 generic.go:334] "Generic (PLEG): container finished" podID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerID="0d0115ae39b0dbfe42e3881cd0c45d69fbfdc1bd100d6b083169f948995f2dce" exitCode=0
Feb 24 02:29:40.622378 master-0 kubenswrapper[31411]: I0224 02:29:40.621643 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" event={"ID":"30565733-2b81-4928-bfcf-0e1fafb3bdb3","Type":"ContainerDied","Data":"0d0115ae39b0dbfe42e3881cd0c45d69fbfdc1bd100d6b083169f948995f2dce"}
Feb 24 02:29:41.643133 master-0 kubenswrapper[31411]: I0224 02:29:41.642949 31411 generic.go:334] "Generic (PLEG): container finished" podID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerID="832ba61aa8bb7faa0bc099537dbdb1d731f5ad89d72b34aed0edc3db41ad2c8b" exitCode=0
Feb 24 02:29:41.643133 master-0 kubenswrapper[31411]: I0224 02:29:41.643017 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" event={"ID":"30565733-2b81-4928-bfcf-0e1fafb3bdb3","Type":"ContainerDied","Data":"832ba61aa8bb7faa0bc099537dbdb1d731f5ad89d72b34aed0edc3db41ad2c8b"}
Feb 24 02:29:43.030601 master-0 kubenswrapper[31411]: I0224 02:29:43.029302 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf"
Feb 24 02:29:43.167673 master-0 kubenswrapper[31411]: I0224 02:29:43.167564 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmfmp\" (UniqueName: \"kubernetes.io/projected/30565733-2b81-4928-bfcf-0e1fafb3bdb3-kube-api-access-pmfmp\") pod \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") "
Feb 24 02:29:43.167979 master-0 kubenswrapper[31411]: I0224 02:29:43.167814 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-bundle\") pod \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") "
Feb 24 02:29:43.168938 master-0 kubenswrapper[31411]: I0224 02:29:43.168862 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-util\") pod \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\" (UID: \"30565733-2b81-4928-bfcf-0e1fafb3bdb3\") "
Feb 24 02:29:43.169069 master-0 kubenswrapper[31411]: I0224 02:29:43.169013 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-bundle" (OuterVolumeSpecName: "bundle") pod "30565733-2b81-4928-bfcf-0e1fafb3bdb3" (UID: "30565733-2b81-4928-bfcf-0e1fafb3bdb3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:29:43.169903 master-0 kubenswrapper[31411]: I0224 02:29:43.169862 31411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:29:43.173451 master-0 kubenswrapper[31411]: I0224 02:29:43.173382 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30565733-2b81-4928-bfcf-0e1fafb3bdb3-kube-api-access-pmfmp" (OuterVolumeSpecName: "kube-api-access-pmfmp") pod "30565733-2b81-4928-bfcf-0e1fafb3bdb3" (UID: "30565733-2b81-4928-bfcf-0e1fafb3bdb3"). InnerVolumeSpecName "kube-api-access-pmfmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:29:43.187860 master-0 kubenswrapper[31411]: I0224 02:29:43.187798 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-util" (OuterVolumeSpecName: "util") pod "30565733-2b81-4928-bfcf-0e1fafb3bdb3" (UID: "30565733-2b81-4928-bfcf-0e1fafb3bdb3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:29:43.273966 master-0 kubenswrapper[31411]: I0224 02:29:43.273765 31411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30565733-2b81-4928-bfcf-0e1fafb3bdb3-util\") on node \"master-0\" DevicePath \"\""
Feb 24 02:29:43.273966 master-0 kubenswrapper[31411]: I0224 02:29:43.273834 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmfmp\" (UniqueName: \"kubernetes.io/projected/30565733-2b81-4928-bfcf-0e1fafb3bdb3-kube-api-access-pmfmp\") on node \"master-0\" DevicePath \"\""
Feb 24 02:29:43.665965 master-0 kubenswrapper[31411]: I0224 02:29:43.665776 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf" event={"ID":"30565733-2b81-4928-bfcf-0e1fafb3bdb3","Type":"ContainerDied","Data":"aaa64f0bba106540b363d309e2fdda29274b5e79fd33316840ac8ced16264be8"}
Feb 24 02:29:43.665965 master-0 kubenswrapper[31411]: I0224 02:29:43.665856 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaa64f0bba106540b363d309e2fdda29274b5e79fd33316840ac8ced16264be8"
Feb 24 02:29:43.665965 master-0 kubenswrapper[31411]: I0224 02:29:43.665936 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42ntpf"
Feb 24 02:29:49.839484 master-0 kubenswrapper[31411]: I0224 02:29:49.839420 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7bbcf6487b-nkgxz"]
Feb 24 02:29:49.840312 master-0 kubenswrapper[31411]: E0224 02:29:49.839822 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerName="extract"
Feb 24 02:29:49.840312 master-0 kubenswrapper[31411]: I0224 02:29:49.839838 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerName="extract"
Feb 24 02:29:49.840312 master-0 kubenswrapper[31411]: E0224 02:29:49.839856 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerName="util"
Feb 24 02:29:49.840312 master-0 kubenswrapper[31411]: I0224 02:29:49.839868 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerName="util"
Feb 24 02:29:49.840312 master-0 kubenswrapper[31411]: E0224 02:29:49.839884 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerName="pull"
Feb 24 02:29:49.840312 master-0 kubenswrapper[31411]: I0224 02:29:49.839893 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerName="pull"
Feb 24 02:29:49.840312 master-0 kubenswrapper[31411]: I0224 02:29:49.840148 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="30565733-2b81-4928-bfcf-0e1fafb3bdb3" containerName="extract"
Feb 24 02:29:49.840754 master-0 kubenswrapper[31411]: I0224 02:29:49.840727 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:49.846718 master-0 kubenswrapper[31411]: I0224 02:29:49.846655 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Feb 24 02:29:49.846892 master-0 kubenswrapper[31411]: I0224 02:29:49.846833 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Feb 24 02:29:49.847023 master-0 kubenswrapper[31411]: I0224 02:29:49.846971 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Feb 24 02:29:49.849466 master-0 kubenswrapper[31411]: I0224 02:29:49.849426 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Feb 24 02:29:49.849818 master-0 kubenswrapper[31411]: I0224 02:29:49.849789 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Feb 24 02:29:49.857123 master-0 kubenswrapper[31411]: I0224 02:29:49.857073 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7bbcf6487b-nkgxz"]
Feb 24 02:29:50.015669 master-0 kubenswrapper[31411]: I0224 02:29:50.015593 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73eee22-f6ec-43ab-8287-f122a830d68a-webhook-cert\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.015962 master-0 kubenswrapper[31411]: I0224 02:29:50.015712 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e73eee22-f6ec-43ab-8287-f122a830d68a-socket-dir\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.015962 master-0 kubenswrapper[31411]: I0224 02:29:50.015852 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73eee22-f6ec-43ab-8287-f122a830d68a-apiservice-cert\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.016030 master-0 kubenswrapper[31411]: I0224 02:29:50.015981 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/e73eee22-f6ec-43ab-8287-f122a830d68a-metrics-cert\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.016157 master-0 kubenswrapper[31411]: I0224 02:29:50.016124 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9dc\" (UniqueName: \"kubernetes.io/projected/e73eee22-f6ec-43ab-8287-f122a830d68a-kube-api-access-nc9dc\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.117849 master-0 kubenswrapper[31411]: I0224 02:29:50.117677 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73eee22-f6ec-43ab-8287-f122a830d68a-webhook-cert\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.117849 master-0 kubenswrapper[31411]: I0224 02:29:50.117757 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e73eee22-f6ec-43ab-8287-f122a830d68a-socket-dir\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.118087 master-0 kubenswrapper[31411]: I0224 02:29:50.117991 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73eee22-f6ec-43ab-8287-f122a830d68a-apiservice-cert\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.118376 master-0 kubenswrapper[31411]: I0224 02:29:50.118314 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/e73eee22-f6ec-43ab-8287-f122a830d68a-metrics-cert\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.118503 master-0 kubenswrapper[31411]: I0224 02:29:50.118483 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e73eee22-f6ec-43ab-8287-f122a830d68a-socket-dir\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.118554 master-0 kubenswrapper[31411]: I0224 02:29:50.118516 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9dc\" (UniqueName: \"kubernetes.io/projected/e73eee22-f6ec-43ab-8287-f122a830d68a-kube-api-access-nc9dc\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.122301 master-0 kubenswrapper[31411]: I0224 02:29:50.122262 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/e73eee22-f6ec-43ab-8287-f122a830d68a-metrics-cert\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.122808 master-0 kubenswrapper[31411]: I0224 02:29:50.122765 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73eee22-f6ec-43ab-8287-f122a830d68a-webhook-cert\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.123511 master-0 kubenswrapper[31411]: I0224 02:29:50.123451 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73eee22-f6ec-43ab-8287-f122a830d68a-apiservice-cert\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.137233 master-0 kubenswrapper[31411]: I0224 02:29:50.137183 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9dc\" (UniqueName: \"kubernetes.io/projected/e73eee22-f6ec-43ab-8287-f122a830d68a-kube-api-access-nc9dc\") pod \"lvms-operator-7bbcf6487b-nkgxz\" (UID: \"e73eee22-f6ec-43ab-8287-f122a830d68a\") " pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.158898 master-0 kubenswrapper[31411]: I0224 02:29:50.158839 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:50.686669 master-0 kubenswrapper[31411]: I0224 02:29:50.686594 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7bbcf6487b-nkgxz"]
Feb 24 02:29:50.733471 master-0 kubenswrapper[31411]: I0224 02:29:50.733361 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz" event={"ID":"e73eee22-f6ec-43ab-8287-f122a830d68a","Type":"ContainerStarted","Data":"d76590b95572227a622c375819075641f3334c7bef3a6286a061cc95af943693"}
Feb 24 02:29:56.792462 master-0 kubenswrapper[31411]: I0224 02:29:56.792357 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz" event={"ID":"e73eee22-f6ec-43ab-8287-f122a830d68a","Type":"ContainerStarted","Data":"1a2a916462b9bedb7547cf0dc5e835aeafa85eb442ddc847f59e600ee877c184"}
Feb 24 02:29:56.793783 master-0 kubenswrapper[31411]: I0224 02:29:56.793736 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:29:56.829844 master-0 kubenswrapper[31411]: I0224 02:29:56.829718 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz" podStartSLOduration=2.490243145 podStartE2EDuration="7.829697463s" podCreationTimestamp="2026-02-24 02:29:49 +0000 UTC" firstStartedPulling="2026-02-24 02:29:50.690989852 +0000 UTC m=+533.908187738" lastFinishedPulling="2026-02-24 02:29:56.03044417 +0000 UTC m=+539.247642056" observedRunningTime="2026-02-24 02:29:56.825484094 +0000 UTC m=+540.042681970" watchObservedRunningTime="2026-02-24 02:29:56.829697463 +0000 UTC m=+540.046895339"
Feb 24 02:29:57.813424 master-0 kubenswrapper[31411]: I0224 02:29:57.813364 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7bbcf6487b-nkgxz"
Feb 24 02:30:00.194291 master-0 kubenswrapper[31411]: I0224 02:30:00.194214 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"]
Feb 24 02:30:00.195436 master-0 kubenswrapper[31411]: I0224 02:30:00.195400 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.198401 master-0 kubenswrapper[31411]: I0224 02:30:00.198338 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 24 02:30:00.199980 master-0 kubenswrapper[31411]: I0224 02:30:00.199939 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-dqz7q"
Feb 24 02:30:00.214684 master-0 kubenswrapper[31411]: I0224 02:30:00.214611 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"]
Feb 24 02:30:00.265591 master-0 kubenswrapper[31411]: I0224 02:30:00.265488 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d46hw\" (UniqueName: \"kubernetes.io/projected/c8325a5f-7007-43fe-a995-11f8205c19b2-kube-api-access-d46hw\") pod \"collect-profiles-29531670-t652n\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.265848 master-0 kubenswrapper[31411]: I0224 02:30:00.265596 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8325a5f-7007-43fe-a995-11f8205c19b2-secret-volume\") pod \"collect-profiles-29531670-t652n\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.265848 master-0 kubenswrapper[31411]: I0224 02:30:00.265759 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8325a5f-7007-43fe-a995-11f8205c19b2-config-volume\") pod \"collect-profiles-29531670-t652n\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.368631 master-0 kubenswrapper[31411]: I0224 02:30:00.368400 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d46hw\" (UniqueName: \"kubernetes.io/projected/c8325a5f-7007-43fe-a995-11f8205c19b2-kube-api-access-d46hw\") pod \"collect-profiles-29531670-t652n\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.368921 master-0 kubenswrapper[31411]: I0224 02:30:00.368682 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8325a5f-7007-43fe-a995-11f8205c19b2-secret-volume\") pod \"collect-profiles-29531670-t652n\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.368921 master-0 kubenswrapper[31411]: I0224 02:30:00.368821 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8325a5f-7007-43fe-a995-11f8205c19b2-config-volume\") pod \"collect-profiles-29531670-t652n\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.370165 master-0 kubenswrapper[31411]: I0224 02:30:00.370115 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8325a5f-7007-43fe-a995-11f8205c19b2-config-volume\") pod \"collect-profiles-29531670-t652n\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.376320 master-0 kubenswrapper[31411]: I0224 02:30:00.376268 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8325a5f-7007-43fe-a995-11f8205c19b2-secret-volume\") pod \"collect-profiles-29531670-t652n\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.389373 master-0 kubenswrapper[31411]: I0224 02:30:00.389326 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d46hw\" (UniqueName: \"kubernetes.io/projected/c8325a5f-7007-43fe-a995-11f8205c19b2-kube-api-access-d46hw\") pod \"collect-profiles-29531670-t652n\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:00.526122 master-0 kubenswrapper[31411]: I0224 02:30:00.526043 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"
Feb 24 02:30:01.079029 master-0 kubenswrapper[31411]: I0224 02:30:01.078948 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"]
Feb 24 02:30:01.079732 master-0 kubenswrapper[31411]: W0224 02:30:01.079660 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8325a5f_7007_43fe_a995_11f8205c19b2.slice/crio-bd85b64e2d1ddbd24e55b88393fd76ecd8f5e62e0695f3b6a483aeaafd6603ba WatchSource:0}: Error finding container bd85b64e2d1ddbd24e55b88393fd76ecd8f5e62e0695f3b6a483aeaafd6603ba: Status 404 returned error can't find the container with id bd85b64e2d1ddbd24e55b88393fd76ecd8f5e62e0695f3b6a483aeaafd6603ba
Feb 24 02:30:01.566825 master-0 kubenswrapper[31411]: I0224 02:30:01.566756 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"]
Feb 24 02:30:01.568624 master-0 kubenswrapper[31411]: I0224 02:30:01.568567 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.571011 master-0 kubenswrapper[31411]: I0224 02:30:01.570957 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-dfkgm"
Feb 24 02:30:01.591436 master-0 kubenswrapper[31411]: I0224 02:30:01.591375 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"]
Feb 24 02:30:01.594084 master-0 kubenswrapper[31411]: I0224 02:30:01.594019 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.594202 master-0 kubenswrapper[31411]: I0224 02:30:01.594111 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69l5m\" (UniqueName: \"kubernetes.io/projected/df2c951a-2433-4a38-8b67-e5e8d02433fa-kube-api-access-69l5m\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.594202 master-0 kubenswrapper[31411]: I0224 02:30:01.594180 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.696143 master-0 kubenswrapper[31411]: I0224 02:30:01.696056 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.696624 master-0 kubenswrapper[31411]: I0224 02:30:01.696168 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.696624 master-0 kubenswrapper[31411]: I0224 02:30:01.696231 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69l5m\" (UniqueName: \"kubernetes.io/projected/df2c951a-2433-4a38-8b67-e5e8d02433fa-kube-api-access-69l5m\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.697731 master-0 kubenswrapper[31411]: I0224 02:30:01.697087 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.697731 master-0 kubenswrapper[31411]: I0224 02:30:01.697304 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.731366 master-0 kubenswrapper[31411]: I0224 02:30:01.731231 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69l5m\" (UniqueName: \"kubernetes.io/projected/df2c951a-2433-4a38-8b67-e5e8d02433fa-kube-api-access-69l5m\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:01.741133 master-0 kubenswrapper[31411]: I0224 02:30:01.741003 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"]
Feb 24 02:30:01.744876 master-0 kubenswrapper[31411]: I0224 02:30:01.744824 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.764700 master-0 kubenswrapper[31411]: I0224 02:30:01.761703 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"]
Feb 24 02:30:01.797427 master-0 kubenswrapper[31411]: I0224 02:30:01.797365 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.797663 master-0 kubenswrapper[31411]: I0224 02:30:01.797600 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.798037 master-0 kubenswrapper[31411]: I0224 02:30:01.797979 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f94n7\" (UniqueName: \"kubernetes.io/projected/63d48e16-529a-486e-b21a-1e5c3f64295d-kube-api-access-f94n7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.855382 master-0 kubenswrapper[31411]: I0224 02:30:01.855341 31411 generic.go:334] "Generic (PLEG): container finished" podID="c8325a5f-7007-43fe-a995-11f8205c19b2" containerID="17295c9c53675112adc7bb9bc9c1af22b6775319fc779fe0c3460fa72ae66c4e" exitCode=0
Feb 24 02:30:01.855534 master-0 kubenswrapper[31411]: I0224 02:30:01.855504 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n" event={"ID":"c8325a5f-7007-43fe-a995-11f8205c19b2","Type":"ContainerDied","Data":"17295c9c53675112adc7bb9bc9c1af22b6775319fc779fe0c3460fa72ae66c4e"}
Feb 24 02:30:01.855667 master-0 kubenswrapper[31411]: I0224 02:30:01.855649 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n" event={"ID":"c8325a5f-7007-43fe-a995-11f8205c19b2","Type":"ContainerStarted","Data":"bd85b64e2d1ddbd24e55b88393fd76ecd8f5e62e0695f3b6a483aeaafd6603ba"}
Feb 24 02:30:01.900372 master-0 kubenswrapper[31411]: I0224 02:30:01.900276 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.900522 master-0 kubenswrapper[31411]: I0224 02:30:01.900473 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.900699 master-0 kubenswrapper[31411]: I0224 02:30:01.900668 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f94n7\" (UniqueName: \"kubernetes.io/projected/63d48e16-529a-486e-b21a-1e5c3f64295d-kube-api-access-f94n7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.903379 master-0 kubenswrapper[31411]: I0224 02:30:01.903354 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.903540 master-0 kubenswrapper[31411]: I0224 02:30:01.903479 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.929141 master-0 kubenswrapper[31411]: I0224 02:30:01.929086 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f94n7\" (UniqueName: \"kubernetes.io/projected/63d48e16-529a-486e-b21a-1e5c3f64295d-kube-api-access-f94n7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:01.963205 master-0 kubenswrapper[31411]: I0224 02:30:01.963156 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
Feb 24 02:30:02.086023 master-0 kubenswrapper[31411]: I0224 02:30:02.085946 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"
Feb 24 02:30:02.417189 master-0 kubenswrapper[31411]: I0224 02:30:02.416696 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4"]
Feb 24 02:30:02.417870 master-0 kubenswrapper[31411]: W0224 02:30:02.417735 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63d48e16_529a_486e_b21a_1e5c3f64295d.slice/crio-3be32c51de13008943df4c730e3bf0ba4591f533cb59a2e46854d41791ee25dc WatchSource:0}: Error finding container 3be32c51de13008943df4c730e3bf0ba4591f533cb59a2e46854d41791ee25dc: Status 404 returned error can't find the container with id 3be32c51de13008943df4c730e3bf0ba4591f533cb59a2e46854d41791ee25dc
Feb 24 02:30:02.513867 master-0 kubenswrapper[31411]: I0224 02:30:02.513751 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"]
Feb 24 02:30:02.871365 master-0 kubenswrapper[31411]: I0224 02:30:02.871287 31411 generic.go:334] "Generic (PLEG): container finished" podID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerID="010330e008c3df816f3ef55b28613ed8189b4ef7a3e679550193982fa1c5ec30" exitCode=0
Feb 24 02:30:02.872103 master-0 kubenswrapper[31411]: I0224 02:30:02.871361 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx"
event={"ID":"df2c951a-2433-4a38-8b67-e5e8d02433fa","Type":"ContainerDied","Data":"010330e008c3df816f3ef55b28613ed8189b4ef7a3e679550193982fa1c5ec30"} Feb 24 02:30:02.872103 master-0 kubenswrapper[31411]: I0224 02:30:02.871423 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx" event={"ID":"df2c951a-2433-4a38-8b67-e5e8d02433fa","Type":"ContainerStarted","Data":"2d05b85cf86e24690dd5b37bb112a9f43fb3a090ee2818a082ecced6269ed41b"} Feb 24 02:30:02.877605 master-0 kubenswrapper[31411]: I0224 02:30:02.877510 31411 generic.go:334] "Generic (PLEG): container finished" podID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerID="e2fcad3a98bed224f0301f3b7e97966c64602ee56e4cf3a762b9f7f1e2f301c4" exitCode=0 Feb 24 02:30:02.878269 master-0 kubenswrapper[31411]: I0224 02:30:02.878068 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4" event={"ID":"63d48e16-529a-486e-b21a-1e5c3f64295d","Type":"ContainerDied","Data":"e2fcad3a98bed224f0301f3b7e97966c64602ee56e4cf3a762b9f7f1e2f301c4"} Feb 24 02:30:02.878374 master-0 kubenswrapper[31411]: I0224 02:30:02.878333 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4" event={"ID":"63d48e16-529a-486e-b21a-1e5c3f64295d","Type":"ContainerStarted","Data":"3be32c51de13008943df4c730e3bf0ba4591f533cb59a2e46854d41791ee25dc"} Feb 24 02:30:03.325536 master-0 kubenswrapper[31411]: I0224 02:30:03.325473 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n" Feb 24 02:30:03.431835 master-0 kubenswrapper[31411]: I0224 02:30:03.431770 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d46hw\" (UniqueName: \"kubernetes.io/projected/c8325a5f-7007-43fe-a995-11f8205c19b2-kube-api-access-d46hw\") pod \"c8325a5f-7007-43fe-a995-11f8205c19b2\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " Feb 24 02:30:03.431835 master-0 kubenswrapper[31411]: I0224 02:30:03.431856 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8325a5f-7007-43fe-a995-11f8205c19b2-secret-volume\") pod \"c8325a5f-7007-43fe-a995-11f8205c19b2\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " Feb 24 02:30:03.432351 master-0 kubenswrapper[31411]: I0224 02:30:03.432162 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8325a5f-7007-43fe-a995-11f8205c19b2-config-volume\") pod \"c8325a5f-7007-43fe-a995-11f8205c19b2\" (UID: \"c8325a5f-7007-43fe-a995-11f8205c19b2\") " Feb 24 02:30:03.433500 master-0 kubenswrapper[31411]: I0224 02:30:03.433419 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8325a5f-7007-43fe-a995-11f8205c19b2-config-volume" (OuterVolumeSpecName: "config-volume") pod "c8325a5f-7007-43fe-a995-11f8205c19b2" (UID: "c8325a5f-7007-43fe-a995-11f8205c19b2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:30:03.437326 master-0 kubenswrapper[31411]: I0224 02:30:03.437256 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8325a5f-7007-43fe-a995-11f8205c19b2-kube-api-access-d46hw" (OuterVolumeSpecName: "kube-api-access-d46hw") pod "c8325a5f-7007-43fe-a995-11f8205c19b2" (UID: "c8325a5f-7007-43fe-a995-11f8205c19b2"). InnerVolumeSpecName "kube-api-access-d46hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:30:03.437937 master-0 kubenswrapper[31411]: I0224 02:30:03.437845 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8325a5f-7007-43fe-a995-11f8205c19b2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c8325a5f-7007-43fe-a995-11f8205c19b2" (UID: "c8325a5f-7007-43fe-a995-11f8205c19b2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:30:03.534995 master-0 kubenswrapper[31411]: I0224 02:30:03.534917 31411 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8325a5f-7007-43fe-a995-11f8205c19b2-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:03.534995 master-0 kubenswrapper[31411]: I0224 02:30:03.534977 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d46hw\" (UniqueName: \"kubernetes.io/projected/c8325a5f-7007-43fe-a995-11f8205c19b2-kube-api-access-d46hw\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:03.534995 master-0 kubenswrapper[31411]: I0224 02:30:03.535000 31411 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8325a5f-7007-43fe-a995-11f8205c19b2-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:03.903604 master-0 kubenswrapper[31411]: I0224 02:30:03.903458 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n" event={"ID":"c8325a5f-7007-43fe-a995-11f8205c19b2","Type":"ContainerDied","Data":"bd85b64e2d1ddbd24e55b88393fd76ecd8f5e62e0695f3b6a483aeaafd6603ba"} Feb 24 02:30:03.903604 master-0 kubenswrapper[31411]: I0224 02:30:03.903540 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n" Feb 24 02:30:03.906668 master-0 kubenswrapper[31411]: I0224 02:30:03.903555 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd85b64e2d1ddbd24e55b88393fd76ecd8f5e62e0695f3b6a483aeaafd6603ba" Feb 24 02:30:04.260845 master-0 kubenswrapper[31411]: I0224 02:30:04.260760 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68"] Feb 24 02:30:04.261356 master-0 kubenswrapper[31411]: E0224 02:30:04.261323 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8325a5f-7007-43fe-a995-11f8205c19b2" containerName="collect-profiles" Feb 24 02:30:04.261356 master-0 kubenswrapper[31411]: I0224 02:30:04.261352 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8325a5f-7007-43fe-a995-11f8205c19b2" containerName="collect-profiles" Feb 24 02:30:04.261672 master-0 kubenswrapper[31411]: I0224 02:30:04.261642 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8325a5f-7007-43fe-a995-11f8205c19b2" containerName="collect-profiles" Feb 24 02:30:04.263569 master-0 kubenswrapper[31411]: I0224 02:30:04.263527 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.291917 master-0 kubenswrapper[31411]: I0224 02:30:04.291182 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68"] Feb 24 02:30:04.352687 master-0 kubenswrapper[31411]: I0224 02:30:04.352566 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.353027 master-0 kubenswrapper[31411]: I0224 02:30:04.352797 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.353027 master-0 kubenswrapper[31411]: I0224 02:30:04.352849 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw7kf\" (UniqueName: \"kubernetes.io/projected/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-kube-api-access-dw7kf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.454855 master-0 kubenswrapper[31411]: I0224 02:30:04.454746 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.455197 master-0 kubenswrapper[31411]: I0224 02:30:04.454936 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.455456 master-0 kubenswrapper[31411]: I0224 02:30:04.455372 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw7kf\" (UniqueName: \"kubernetes.io/projected/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-kube-api-access-dw7kf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.455953 master-0 kubenswrapper[31411]: I0224 02:30:04.455887 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.456076 master-0 kubenswrapper[31411]: I0224 02:30:04.455974 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-bundle\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.484383 master-0 kubenswrapper[31411]: I0224 02:30:04.484307 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw7kf\" (UniqueName: \"kubernetes.io/projected/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-kube-api-access-dw7kf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:04.589008 master-0 kubenswrapper[31411]: I0224 02:30:04.588844 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:05.136652 master-0 kubenswrapper[31411]: I0224 02:30:05.136526 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68"] Feb 24 02:30:05.143964 master-0 kubenswrapper[31411]: W0224 02:30:05.143874 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91a0fd7c_d9a9_47fe_b086_15762c6b99fb.slice/crio-e50c4ba47c4681bb7b8f65d920d85ca281016eff5fc2b4b24c702b88d9a6a118 WatchSource:0}: Error finding container e50c4ba47c4681bb7b8f65d920d85ca281016eff5fc2b4b24c702b88d9a6a118: Status 404 returned error can't find the container with id e50c4ba47c4681bb7b8f65d920d85ca281016eff5fc2b4b24c702b88d9a6a118 Feb 24 02:30:05.931854 master-0 kubenswrapper[31411]: I0224 02:30:05.931764 31411 generic.go:334] "Generic (PLEG): container finished" podID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerID="c980654e0a241bd50682700a6748cbb0319cfe1eafd71f0e454901fd9f9c20ef" exitCode=0 Feb 
24 02:30:05.931854 master-0 kubenswrapper[31411]: I0224 02:30:05.931850 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" event={"ID":"91a0fd7c-d9a9-47fe-b086-15762c6b99fb","Type":"ContainerDied","Data":"c980654e0a241bd50682700a6748cbb0319cfe1eafd71f0e454901fd9f9c20ef"} Feb 24 02:30:05.932284 master-0 kubenswrapper[31411]: I0224 02:30:05.931899 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" event={"ID":"91a0fd7c-d9a9-47fe-b086-15762c6b99fb","Type":"ContainerStarted","Data":"e50c4ba47c4681bb7b8f65d920d85ca281016eff5fc2b4b24c702b88d9a6a118"} Feb 24 02:30:08.403622 master-0 kubenswrapper[31411]: I0224 02:30:08.403529 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx"] Feb 24 02:30:08.408318 master-0 kubenswrapper[31411]: I0224 02:30:08.408253 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.416989 master-0 kubenswrapper[31411]: I0224 02:30:08.416921 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx"] Feb 24 02:30:08.540916 master-0 kubenswrapper[31411]: I0224 02:30:08.540800 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.541258 master-0 kubenswrapper[31411]: I0224 02:30:08.541130 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.541258 master-0 kubenswrapper[31411]: I0224 02:30:08.541202 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkc7h\" (UniqueName: \"kubernetes.io/projected/4e092618-1127-4b14-91b4-9e875816d8ec-kube-api-access-kkc7h\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.642976 master-0 kubenswrapper[31411]: I0224 02:30:08.642895 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.642976 master-0 kubenswrapper[31411]: I0224 02:30:08.642973 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.643902 master-0 kubenswrapper[31411]: I0224 02:30:08.643857 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.643902 master-0 kubenswrapper[31411]: I0224 02:30:08.643857 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkc7h\" (UniqueName: \"kubernetes.io/projected/4e092618-1127-4b14-91b4-9e875816d8ec-kube-api-access-kkc7h\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.644104 master-0 kubenswrapper[31411]: I0224 02:30:08.644020 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-bundle\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.672718 master-0 kubenswrapper[31411]: I0224 02:30:08.672527 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkc7h\" (UniqueName: \"kubernetes.io/projected/4e092618-1127-4b14-91b4-9e875816d8ec-kube-api-access-kkc7h\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:08.743302 master-0 kubenswrapper[31411]: I0224 02:30:08.743201 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:09.287534 master-0 kubenswrapper[31411]: I0224 02:30:09.287467 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx"] Feb 24 02:30:09.295193 master-0 kubenswrapper[31411]: W0224 02:30:09.295126 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e092618_1127_4b14_91b4_9e875816d8ec.slice/crio-3aa5b19a37c2a0ef2473ce731f4458d2d495d41ca0d7bdda8dcbfd7c00eab83e WatchSource:0}: Error finding container 3aa5b19a37c2a0ef2473ce731f4458d2d495d41ca0d7bdda8dcbfd7c00eab83e: Status 404 returned error can't find the container with id 3aa5b19a37c2a0ef2473ce731f4458d2d495d41ca0d7bdda8dcbfd7c00eab83e Feb 24 02:30:09.984072 master-0 kubenswrapper[31411]: I0224 02:30:09.983977 31411 generic.go:334] "Generic (PLEG): container finished" podID="4e092618-1127-4b14-91b4-9e875816d8ec" containerID="f26e4790d5565c0a31b1394a883522a6fdb1701f6fddec1c9da7975a61aa4648" exitCode=0 Feb 
24 02:30:09.984072 master-0 kubenswrapper[31411]: I0224 02:30:09.984030 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" event={"ID":"4e092618-1127-4b14-91b4-9e875816d8ec","Type":"ContainerDied","Data":"f26e4790d5565c0a31b1394a883522a6fdb1701f6fddec1c9da7975a61aa4648"} Feb 24 02:30:09.984072 master-0 kubenswrapper[31411]: I0224 02:30:09.984060 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" event={"ID":"4e092618-1127-4b14-91b4-9e875816d8ec","Type":"ContainerStarted","Data":"3aa5b19a37c2a0ef2473ce731f4458d2d495d41ca0d7bdda8dcbfd7c00eab83e"} Feb 24 02:30:11.003410 master-0 kubenswrapper[31411]: I0224 02:30:11.002674 31411 generic.go:334] "Generic (PLEG): container finished" podID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerID="8d6b990ae69e40379ff12a2a4d9af09f88a52a809fce3b8586edf1ae05c80a74" exitCode=0 Feb 24 02:30:11.003410 master-0 kubenswrapper[31411]: I0224 02:30:11.002789 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4" event={"ID":"63d48e16-529a-486e-b21a-1e5c3f64295d","Type":"ContainerDied","Data":"8d6b990ae69e40379ff12a2a4d9af09f88a52a809fce3b8586edf1ae05c80a74"} Feb 24 02:30:11.008652 master-0 kubenswrapper[31411]: I0224 02:30:11.007887 31411 generic.go:334] "Generic (PLEG): container finished" podID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerID="01d331a93e27eb5ecd63e65b95d052a073116881561a3036f3765e655cbf8d49" exitCode=0 Feb 24 02:30:11.008652 master-0 kubenswrapper[31411]: I0224 02:30:11.007936 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" 
event={"ID":"91a0fd7c-d9a9-47fe-b086-15762c6b99fb","Type":"ContainerDied","Data":"01d331a93e27eb5ecd63e65b95d052a073116881561a3036f3765e655cbf8d49"} Feb 24 02:30:12.019969 master-0 kubenswrapper[31411]: I0224 02:30:12.019900 31411 generic.go:334] "Generic (PLEG): container finished" podID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerID="469fddf6bdc4a102b26618e853a1bda0e8605b01bd06d3f62c95fb48dae9b614" exitCode=0 Feb 24 02:30:12.021184 master-0 kubenswrapper[31411]: I0224 02:30:12.021034 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4" event={"ID":"63d48e16-529a-486e-b21a-1e5c3f64295d","Type":"ContainerDied","Data":"469fddf6bdc4a102b26618e853a1bda0e8605b01bd06d3f62c95fb48dae9b614"} Feb 24 02:30:13.034260 master-0 kubenswrapper[31411]: I0224 02:30:13.034173 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx" event={"ID":"df2c951a-2433-4a38-8b67-e5e8d02433fa","Type":"ContainerStarted","Data":"b759e10f0b12cda08ff7f3c7f87d2ce370ec19e71b846dfdb15aa49de6009bd5"} Feb 24 02:30:13.037373 master-0 kubenswrapper[31411]: I0224 02:30:13.037277 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" event={"ID":"4e092618-1127-4b14-91b4-9e875816d8ec","Type":"ContainerStarted","Data":"17b5fcda17d9ef7d3ba17d0896132392b2bb9665a67f0fb4983980ddebe41856"} Feb 24 02:30:13.041832 master-0 kubenswrapper[31411]: I0224 02:30:13.041493 31411 generic.go:334] "Generic (PLEG): container finished" podID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerID="fb20daa6a5fdcfc0efa8b7303684e577ea34d66e8bdfa91897bb81b9ff114362" exitCode=0 Feb 24 02:30:13.041832 master-0 kubenswrapper[31411]: I0224 02:30:13.041728 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" event={"ID":"91a0fd7c-d9a9-47fe-b086-15762c6b99fb","Type":"ContainerDied","Data":"fb20daa6a5fdcfc0efa8b7303684e577ea34d66e8bdfa91897bb81b9ff114362"} Feb 24 02:30:13.516871 master-0 kubenswrapper[31411]: I0224 02:30:13.516792 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4" Feb 24 02:30:13.665059 master-0 kubenswrapper[31411]: I0224 02:30:13.664943 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-bundle\") pod \"63d48e16-529a-486e-b21a-1e5c3f64295d\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " Feb 24 02:30:13.665369 master-0 kubenswrapper[31411]: I0224 02:30:13.665119 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-util\") pod \"63d48e16-529a-486e-b21a-1e5c3f64295d\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " Feb 24 02:30:13.665369 master-0 kubenswrapper[31411]: I0224 02:30:13.665276 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f94n7\" (UniqueName: \"kubernetes.io/projected/63d48e16-529a-486e-b21a-1e5c3f64295d-kube-api-access-f94n7\") pod \"63d48e16-529a-486e-b21a-1e5c3f64295d\" (UID: \"63d48e16-529a-486e-b21a-1e5c3f64295d\") " Feb 24 02:30:13.670483 master-0 kubenswrapper[31411]: I0224 02:30:13.670417 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d48e16-529a-486e-b21a-1e5c3f64295d-kube-api-access-f94n7" (OuterVolumeSpecName: "kube-api-access-f94n7") pod "63d48e16-529a-486e-b21a-1e5c3f64295d" (UID: "63d48e16-529a-486e-b21a-1e5c3f64295d"). InnerVolumeSpecName "kube-api-access-f94n7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:30:13.672087 master-0 kubenswrapper[31411]: I0224 02:30:13.671995 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-bundle" (OuterVolumeSpecName: "bundle") pod "63d48e16-529a-486e-b21a-1e5c3f64295d" (UID: "63d48e16-529a-486e-b21a-1e5c3f64295d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:30:13.687899 master-0 kubenswrapper[31411]: I0224 02:30:13.687734 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-util" (OuterVolumeSpecName: "util") pod "63d48e16-529a-486e-b21a-1e5c3f64295d" (UID: "63d48e16-529a-486e-b21a-1e5c3f64295d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:30:13.767910 master-0 kubenswrapper[31411]: I0224 02:30:13.767789 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f94n7\" (UniqueName: \"kubernetes.io/projected/63d48e16-529a-486e-b21a-1e5c3f64295d-kube-api-access-f94n7\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:13.767910 master-0 kubenswrapper[31411]: I0224 02:30:13.767864 31411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:13.767910 master-0 kubenswrapper[31411]: I0224 02:30:13.767894 31411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d48e16-529a-486e-b21a-1e5c3f64295d-util\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:14.056947 master-0 kubenswrapper[31411]: I0224 02:30:14.056843 31411 generic.go:334] "Generic (PLEG): container finished" podID="4e092618-1127-4b14-91b4-9e875816d8ec" 
containerID="17b5fcda17d9ef7d3ba17d0896132392b2bb9665a67f0fb4983980ddebe41856" exitCode=0 Feb 24 02:30:14.057866 master-0 kubenswrapper[31411]: I0224 02:30:14.056944 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" event={"ID":"4e092618-1127-4b14-91b4-9e875816d8ec","Type":"ContainerDied","Data":"17b5fcda17d9ef7d3ba17d0896132392b2bb9665a67f0fb4983980ddebe41856"} Feb 24 02:30:14.061975 master-0 kubenswrapper[31411]: I0224 02:30:14.061847 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4" event={"ID":"63d48e16-529a-486e-b21a-1e5c3f64295d","Type":"ContainerDied","Data":"3be32c51de13008943df4c730e3bf0ba4591f533cb59a2e46854d41791ee25dc"} Feb 24 02:30:14.062119 master-0 kubenswrapper[31411]: I0224 02:30:14.062012 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3be32c51de13008943df4c730e3bf0ba4591f533cb59a2e46854d41791ee25dc" Feb 24 02:30:14.063135 master-0 kubenswrapper[31411]: I0224 02:30:14.063074 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213r54h4" Feb 24 02:30:14.065331 master-0 kubenswrapper[31411]: I0224 02:30:14.065260 31411 generic.go:334] "Generic (PLEG): container finished" podID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerID="b759e10f0b12cda08ff7f3c7f87d2ce370ec19e71b846dfdb15aa49de6009bd5" exitCode=0 Feb 24 02:30:14.065519 master-0 kubenswrapper[31411]: I0224 02:30:14.065463 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx" event={"ID":"df2c951a-2433-4a38-8b67-e5e8d02433fa","Type":"ContainerDied","Data":"b759e10f0b12cda08ff7f3c7f87d2ce370ec19e71b846dfdb15aa49de6009bd5"} Feb 24 02:30:14.521362 master-0 kubenswrapper[31411]: I0224 02:30:14.521291 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:14.709956 master-0 kubenswrapper[31411]: I0224 02:30:14.709878 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw7kf\" (UniqueName: \"kubernetes.io/projected/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-kube-api-access-dw7kf\") pod \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " Feb 24 02:30:14.710146 master-0 kubenswrapper[31411]: I0224 02:30:14.709963 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-bundle\") pod \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " Feb 24 02:30:14.710228 master-0 kubenswrapper[31411]: I0224 02:30:14.710177 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-util\") 
pod \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\" (UID: \"91a0fd7c-d9a9-47fe-b086-15762c6b99fb\") " Feb 24 02:30:14.710762 master-0 kubenswrapper[31411]: I0224 02:30:14.710529 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-bundle" (OuterVolumeSpecName: "bundle") pod "91a0fd7c-d9a9-47fe-b086-15762c6b99fb" (UID: "91a0fd7c-d9a9-47fe-b086-15762c6b99fb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:30:14.711205 master-0 kubenswrapper[31411]: I0224 02:30:14.711160 31411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:14.714316 master-0 kubenswrapper[31411]: I0224 02:30:14.714245 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-kube-api-access-dw7kf" (OuterVolumeSpecName: "kube-api-access-dw7kf") pod "91a0fd7c-d9a9-47fe-b086-15762c6b99fb" (UID: "91a0fd7c-d9a9-47fe-b086-15762c6b99fb"). InnerVolumeSpecName "kube-api-access-dw7kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:30:14.731649 master-0 kubenswrapper[31411]: I0224 02:30:14.731549 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-util" (OuterVolumeSpecName: "util") pod "91a0fd7c-d9a9-47fe-b086-15762c6b99fb" (UID: "91a0fd7c-d9a9-47fe-b086-15762c6b99fb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:30:14.812650 master-0 kubenswrapper[31411]: I0224 02:30:14.812522 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw7kf\" (UniqueName: \"kubernetes.io/projected/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-kube-api-access-dw7kf\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:14.812650 master-0 kubenswrapper[31411]: I0224 02:30:14.812632 31411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91a0fd7c-d9a9-47fe-b086-15762c6b99fb-util\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:15.083787 master-0 kubenswrapper[31411]: I0224 02:30:15.083705 31411 generic.go:334] "Generic (PLEG): container finished" podID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerID="c43f16d75e386305e3bb9270851cc07e6e7e2a1d507a217033c39588d0f76659" exitCode=0 Feb 24 02:30:15.084709 master-0 kubenswrapper[31411]: I0224 02:30:15.083781 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx" event={"ID":"df2c951a-2433-4a38-8b67-e5e8d02433fa","Type":"ContainerDied","Data":"c43f16d75e386305e3bb9270851cc07e6e7e2a1d507a217033c39588d0f76659"} Feb 24 02:30:15.092858 master-0 kubenswrapper[31411]: I0224 02:30:15.092794 31411 generic.go:334] "Generic (PLEG): container finished" podID="4e092618-1127-4b14-91b4-9e875816d8ec" containerID="56e7bc204eba1d85b32111c4d5be497771a21a859ffdd0b87c06af736b7807dc" exitCode=0 Feb 24 02:30:15.097160 master-0 kubenswrapper[31411]: I0224 02:30:15.097124 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" Feb 24 02:30:15.111527 master-0 kubenswrapper[31411]: I0224 02:30:15.111428 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" event={"ID":"4e092618-1127-4b14-91b4-9e875816d8ec","Type":"ContainerDied","Data":"56e7bc204eba1d85b32111c4d5be497771a21a859ffdd0b87c06af736b7807dc"} Feb 24 02:30:15.111527 master-0 kubenswrapper[31411]: I0224 02:30:15.111520 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaczj68" event={"ID":"91a0fd7c-d9a9-47fe-b086-15762c6b99fb","Type":"ContainerDied","Data":"e50c4ba47c4681bb7b8f65d920d85ca281016eff5fc2b4b24c702b88d9a6a118"} Feb 24 02:30:15.111779 master-0 kubenswrapper[31411]: I0224 02:30:15.111569 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e50c4ba47c4681bb7b8f65d920d85ca281016eff5fc2b4b24c702b88d9a6a118" Feb 24 02:30:16.691830 master-0 kubenswrapper[31411]: I0224 02:30:16.691707 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:16.712629 master-0 kubenswrapper[31411]: I0224 02:30:16.710442 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx" Feb 24 02:30:16.812060 master-0 kubenswrapper[31411]: I0224 02:30:16.811989 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-bundle\") pod \"4e092618-1127-4b14-91b4-9e875816d8ec\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " Feb 24 02:30:16.812334 master-0 kubenswrapper[31411]: I0224 02:30:16.812118 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69l5m\" (UniqueName: \"kubernetes.io/projected/df2c951a-2433-4a38-8b67-e5e8d02433fa-kube-api-access-69l5m\") pod \"df2c951a-2433-4a38-8b67-e5e8d02433fa\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " Feb 24 02:30:16.812334 master-0 kubenswrapper[31411]: I0224 02:30:16.812177 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkc7h\" (UniqueName: \"kubernetes.io/projected/4e092618-1127-4b14-91b4-9e875816d8ec-kube-api-access-kkc7h\") pod \"4e092618-1127-4b14-91b4-9e875816d8ec\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " Feb 24 02:30:16.812334 master-0 kubenswrapper[31411]: I0224 02:30:16.812261 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-util\") pod \"4e092618-1127-4b14-91b4-9e875816d8ec\" (UID: \"4e092618-1127-4b14-91b4-9e875816d8ec\") " Feb 24 02:30:16.812427 master-0 kubenswrapper[31411]: I0224 02:30:16.812361 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-bundle\") pod \"df2c951a-2433-4a38-8b67-e5e8d02433fa\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " Feb 24 02:30:16.812427 master-0 kubenswrapper[31411]: I0224 
02:30:16.812411 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-util\") pod \"df2c951a-2433-4a38-8b67-e5e8d02433fa\" (UID: \"df2c951a-2433-4a38-8b67-e5e8d02433fa\") " Feb 24 02:30:16.815156 master-0 kubenswrapper[31411]: I0224 02:30:16.815100 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-bundle" (OuterVolumeSpecName: "bundle") pod "4e092618-1127-4b14-91b4-9e875816d8ec" (UID: "4e092618-1127-4b14-91b4-9e875816d8ec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:30:16.815239 master-0 kubenswrapper[31411]: I0224 02:30:16.815181 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-bundle" (OuterVolumeSpecName: "bundle") pod "df2c951a-2433-4a38-8b67-e5e8d02433fa" (UID: "df2c951a-2433-4a38-8b67-e5e8d02433fa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:30:16.819304 master-0 kubenswrapper[31411]: I0224 02:30:16.819269 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e092618-1127-4b14-91b4-9e875816d8ec-kube-api-access-kkc7h" (OuterVolumeSpecName: "kube-api-access-kkc7h") pod "4e092618-1127-4b14-91b4-9e875816d8ec" (UID: "4e092618-1127-4b14-91b4-9e875816d8ec"). InnerVolumeSpecName "kube-api-access-kkc7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:30:16.823519 master-0 kubenswrapper[31411]: I0224 02:30:16.823438 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-util" (OuterVolumeSpecName: "util") pod "4e092618-1127-4b14-91b4-9e875816d8ec" (UID: "4e092618-1127-4b14-91b4-9e875816d8ec"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:30:16.823884 master-0 kubenswrapper[31411]: I0224 02:30:16.823848 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-util" (OuterVolumeSpecName: "util") pod "df2c951a-2433-4a38-8b67-e5e8d02433fa" (UID: "df2c951a-2433-4a38-8b67-e5e8d02433fa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:30:16.824440 master-0 kubenswrapper[31411]: I0224 02:30:16.824406 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2c951a-2433-4a38-8b67-e5e8d02433fa-kube-api-access-69l5m" (OuterVolumeSpecName: "kube-api-access-69l5m") pod "df2c951a-2433-4a38-8b67-e5e8d02433fa" (UID: "df2c951a-2433-4a38-8b67-e5e8d02433fa"). InnerVolumeSpecName "kube-api-access-69l5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:30:16.914302 master-0 kubenswrapper[31411]: I0224 02:30:16.914221 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkc7h\" (UniqueName: \"kubernetes.io/projected/4e092618-1127-4b14-91b4-9e875816d8ec-kube-api-access-kkc7h\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:16.914302 master-0 kubenswrapper[31411]: I0224 02:30:16.914286 31411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-util\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:16.914302 master-0 kubenswrapper[31411]: I0224 02:30:16.914307 31411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:16.914780 master-0 kubenswrapper[31411]: I0224 02:30:16.914343 31411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/df2c951a-2433-4a38-8b67-e5e8d02433fa-util\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:16.914780 master-0 kubenswrapper[31411]: I0224 02:30:16.914364 31411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e092618-1127-4b14-91b4-9e875816d8ec-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:16.914780 master-0 kubenswrapper[31411]: I0224 02:30:16.914388 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69l5m\" (UniqueName: \"kubernetes.io/projected/df2c951a-2433-4a38-8b67-e5e8d02433fa-kube-api-access-69l5m\") on node \"master-0\" DevicePath \"\"" Feb 24 02:30:17.129037 master-0 kubenswrapper[31411]: I0224 02:30:17.128968 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx" Feb 24 02:30:17.129379 master-0 kubenswrapper[31411]: I0224 02:30:17.128955 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5cchhx" event={"ID":"df2c951a-2433-4a38-8b67-e5e8d02433fa","Type":"ContainerDied","Data":"2d05b85cf86e24690dd5b37bb112a9f43fb3a090ee2818a082ecced6269ed41b"} Feb 24 02:30:17.129379 master-0 kubenswrapper[31411]: I0224 02:30:17.129287 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d05b85cf86e24690dd5b37bb112a9f43fb3a090ee2818a082ecced6269ed41b" Feb 24 02:30:17.133221 master-0 kubenswrapper[31411]: I0224 02:30:17.132962 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" event={"ID":"4e092618-1127-4b14-91b4-9e875816d8ec","Type":"ContainerDied","Data":"3aa5b19a37c2a0ef2473ce731f4458d2d495d41ca0d7bdda8dcbfd7c00eab83e"} Feb 24 02:30:17.133357 master-0 kubenswrapper[31411]: I0224 02:30:17.133249 31411 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa5b19a37c2a0ef2473ce731f4458d2d495d41ca0d7bdda8dcbfd7c00eab83e" Feb 24 02:30:17.133357 master-0 kubenswrapper[31411]: I0224 02:30:17.133088 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rhlrx" Feb 24 02:30:24.923815 master-0 kubenswrapper[31411]: I0224 02:30:24.923713 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-xp57m"] Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924356 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerName="util" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924384 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerName="util" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924417 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e092618-1127-4b14-91b4-9e875816d8ec" containerName="extract" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924432 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e092618-1127-4b14-91b4-9e875816d8ec" containerName="extract" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924476 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerName="extract" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924491 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerName="extract" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924514 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e092618-1127-4b14-91b4-9e875816d8ec" 
containerName="pull" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924535 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e092618-1127-4b14-91b4-9e875816d8ec" containerName="pull" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924566 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerName="util" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924604 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerName="util" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924633 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerName="extract" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924646 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerName="extract" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924675 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerName="pull" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924689 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerName="pull" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924710 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerName="pull" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924724 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerName="pull" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924753 31411 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerName="util" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924766 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerName="util" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924781 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerName="pull" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924793 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerName="pull" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924830 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerName="extract" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924844 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerName="extract" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: E0224 02:30:24.924873 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e092618-1127-4b14-91b4-9e875816d8ec" containerName="util" Feb 24 02:30:24.924971 master-0 kubenswrapper[31411]: I0224 02:30:24.924886 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e092618-1127-4b14-91b4-9e875816d8ec" containerName="util" Feb 24 02:30:24.926733 master-0 kubenswrapper[31411]: I0224 02:30:24.925187 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a0fd7c-d9a9-47fe-b086-15762c6b99fb" containerName="extract" Feb 24 02:30:24.926733 master-0 kubenswrapper[31411]: I0224 02:30:24.925298 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e092618-1127-4b14-91b4-9e875816d8ec" containerName="extract" Feb 24 02:30:24.926733 master-0 kubenswrapper[31411]: I0224 02:30:24.925324 31411 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="63d48e16-529a-486e-b21a-1e5c3f64295d" containerName="extract" Feb 24 02:30:24.926733 master-0 kubenswrapper[31411]: I0224 02:30:24.925350 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2c951a-2433-4a38-8b67-e5e8d02433fa" containerName="extract" Feb 24 02:30:24.926733 master-0 kubenswrapper[31411]: I0224 02:30:24.926371 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-xp57m" Feb 24 02:30:24.929645 master-0 kubenswrapper[31411]: I0224 02:30:24.929552 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 24 02:30:24.930012 master-0 kubenswrapper[31411]: I0224 02:30:24.929963 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 24 02:30:24.958919 master-0 kubenswrapper[31411]: I0224 02:30:24.958816 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-xp57m"] Feb 24 02:30:25.080164 master-0 kubenswrapper[31411]: I0224 02:30:25.080037 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvh9\" (UniqueName: \"kubernetes.io/projected/9550d924-11ab-4206-acdb-02ed4342db27-kube-api-access-9mvh9\") pod \"nmstate-operator-694c9596b7-xp57m\" (UID: \"9550d924-11ab-4206-acdb-02ed4342db27\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-xp57m" Feb 24 02:30:25.183039 master-0 kubenswrapper[31411]: I0224 02:30:25.182879 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvh9\" (UniqueName: \"kubernetes.io/projected/9550d924-11ab-4206-acdb-02ed4342db27-kube-api-access-9mvh9\") pod \"nmstate-operator-694c9596b7-xp57m\" (UID: \"9550d924-11ab-4206-acdb-02ed4342db27\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-xp57m" Feb 24 
02:30:25.224693 master-0 kubenswrapper[31411]: I0224 02:30:25.224144 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvh9\" (UniqueName: \"kubernetes.io/projected/9550d924-11ab-4206-acdb-02ed4342db27-kube-api-access-9mvh9\") pod \"nmstate-operator-694c9596b7-xp57m\" (UID: \"9550d924-11ab-4206-acdb-02ed4342db27\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-xp57m" Feb 24 02:30:25.256645 master-0 kubenswrapper[31411]: I0224 02:30:25.256525 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-xp57m" Feb 24 02:30:25.883591 master-0 kubenswrapper[31411]: I0224 02:30:25.883478 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-xp57m"] Feb 24 02:30:25.891673 master-0 kubenswrapper[31411]: W0224 02:30:25.891595 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9550d924_11ab_4206_acdb_02ed4342db27.slice/crio-f4fe7c2f0aba2c34dcb049930e0a1f9bdc9058080d7cd1693e12f99f3beefdc7 WatchSource:0}: Error finding container f4fe7c2f0aba2c34dcb049930e0a1f9bdc9058080d7cd1693e12f99f3beefdc7: Status 404 returned error can't find the container with id f4fe7c2f0aba2c34dcb049930e0a1f9bdc9058080d7cd1693e12f99f3beefdc7 Feb 24 02:30:26.251723 master-0 kubenswrapper[31411]: I0224 02:30:26.251622 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-xp57m" event={"ID":"9550d924-11ab-4206-acdb-02ed4342db27","Type":"ContainerStarted","Data":"f4fe7c2f0aba2c34dcb049930e0a1f9bdc9058080d7cd1693e12f99f3beefdc7"} Feb 24 02:30:29.287953 master-0 kubenswrapper[31411]: I0224 02:30:29.287890 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-xp57m" 
event={"ID":"9550d924-11ab-4206-acdb-02ed4342db27","Type":"ContainerStarted","Data":"9d87b453cc4a90d62b6527f108ea09689550b28f14718f52874fac77a4cce956"} Feb 24 02:30:29.335605 master-0 kubenswrapper[31411]: I0224 02:30:29.325558 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-xp57m" podStartSLOduration=2.629578353 podStartE2EDuration="5.325537512s" podCreationTimestamp="2026-02-24 02:30:24 +0000 UTC" firstStartedPulling="2026-02-24 02:30:25.895479882 +0000 UTC m=+569.112677728" lastFinishedPulling="2026-02-24 02:30:28.591439041 +0000 UTC m=+571.808636887" observedRunningTime="2026-02-24 02:30:29.310180994 +0000 UTC m=+572.527378840" watchObservedRunningTime="2026-02-24 02:30:29.325537512 +0000 UTC m=+572.542735358" Feb 24 02:30:29.471637 master-0 kubenswrapper[31411]: I0224 02:30:29.471534 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7577845998-zvq74"] Feb 24 02:30:29.473008 master-0 kubenswrapper[31411]: I0224 02:30:29.472964 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.475793 master-0 kubenswrapper[31411]: I0224 02:30:29.475736 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 24 02:30:29.475978 master-0 kubenswrapper[31411]: I0224 02:30:29.475941 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 24 02:30:29.476369 master-0 kubenswrapper[31411]: I0224 02:30:29.476201 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 24 02:30:29.488011 master-0 kubenswrapper[31411]: I0224 02:30:29.487968 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 24 02:30:29.491148 master-0 kubenswrapper[31411]: I0224 02:30:29.491111 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7577845998-zvq74"] Feb 24 02:30:29.605147 master-0 kubenswrapper[31411]: I0224 02:30:29.604958 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skxm\" (UniqueName: \"kubernetes.io/projected/4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7-kube-api-access-9skxm\") pod \"metallb-operator-controller-manager-7577845998-zvq74\" (UID: \"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7\") " pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.605629 master-0 kubenswrapper[31411]: I0224 02:30:29.605538 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7-webhook-cert\") pod \"metallb-operator-controller-manager-7577845998-zvq74\" (UID: \"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7\") " 
pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.605989 master-0 kubenswrapper[31411]: I0224 02:30:29.605955 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7-apiservice-cert\") pod \"metallb-operator-controller-manager-7577845998-zvq74\" (UID: \"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7\") " pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.708818 master-0 kubenswrapper[31411]: I0224 02:30:29.708728 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9skxm\" (UniqueName: \"kubernetes.io/projected/4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7-kube-api-access-9skxm\") pod \"metallb-operator-controller-manager-7577845998-zvq74\" (UID: \"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7\") " pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.709232 master-0 kubenswrapper[31411]: I0224 02:30:29.708958 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7-webhook-cert\") pod \"metallb-operator-controller-manager-7577845998-zvq74\" (UID: \"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7\") " pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.709232 master-0 kubenswrapper[31411]: I0224 02:30:29.709128 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7-apiservice-cert\") pod \"metallb-operator-controller-manager-7577845998-zvq74\" (UID: \"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7\") " pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.710021 master-0 kubenswrapper[31411]: I0224 02:30:29.709908 
31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7"] Feb 24 02:30:29.721722 master-0 kubenswrapper[31411]: I0224 02:30:29.711728 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:29.721722 master-0 kubenswrapper[31411]: I0224 02:30:29.713419 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7-apiservice-cert\") pod \"metallb-operator-controller-manager-7577845998-zvq74\" (UID: \"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7\") " pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.721722 master-0 kubenswrapper[31411]: I0224 02:30:29.715664 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7-webhook-cert\") pod \"metallb-operator-controller-manager-7577845998-zvq74\" (UID: \"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7\") " pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.721722 master-0 kubenswrapper[31411]: I0224 02:30:29.716157 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 24 02:30:29.721722 master-0 kubenswrapper[31411]: I0224 02:30:29.717393 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 24 02:30:29.729005 master-0 kubenswrapper[31411]: I0224 02:30:29.728969 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7"] Feb 24 02:30:29.730366 master-0 kubenswrapper[31411]: I0224 02:30:29.730321 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9skxm\" (UniqueName: \"kubernetes.io/projected/4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7-kube-api-access-9skxm\") pod \"metallb-operator-controller-manager-7577845998-zvq74\" (UID: \"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7\") " pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.793864 master-0 kubenswrapper[31411]: I0224 02:30:29.791642 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:29.810795 master-0 kubenswrapper[31411]: I0224 02:30:29.810552 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kv9z\" (UniqueName: \"kubernetes.io/projected/013fb964-8d21-4b63-9afb-521a7e902920-kube-api-access-6kv9z\") pod \"metallb-operator-webhook-server-559d754c8d-8sgn7\" (UID: \"013fb964-8d21-4b63-9afb-521a7e902920\") " pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:29.810795 master-0 kubenswrapper[31411]: I0224 02:30:29.810618 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/013fb964-8d21-4b63-9afb-521a7e902920-webhook-cert\") pod \"metallb-operator-webhook-server-559d754c8d-8sgn7\" (UID: \"013fb964-8d21-4b63-9afb-521a7e902920\") " pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:29.810795 master-0 kubenswrapper[31411]: I0224 02:30:29.810656 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/013fb964-8d21-4b63-9afb-521a7e902920-apiservice-cert\") pod \"metallb-operator-webhook-server-559d754c8d-8sgn7\" (UID: \"013fb964-8d21-4b63-9afb-521a7e902920\") " pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:29.915227 master-0 kubenswrapper[31411]: 
I0224 02:30:29.915162 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kv9z\" (UniqueName: \"kubernetes.io/projected/013fb964-8d21-4b63-9afb-521a7e902920-kube-api-access-6kv9z\") pod \"metallb-operator-webhook-server-559d754c8d-8sgn7\" (UID: \"013fb964-8d21-4b63-9afb-521a7e902920\") " pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:29.915420 master-0 kubenswrapper[31411]: I0224 02:30:29.915395 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/013fb964-8d21-4b63-9afb-521a7e902920-webhook-cert\") pod \"metallb-operator-webhook-server-559d754c8d-8sgn7\" (UID: \"013fb964-8d21-4b63-9afb-521a7e902920\") " pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:29.915474 master-0 kubenswrapper[31411]: I0224 02:30:29.915448 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/013fb964-8d21-4b63-9afb-521a7e902920-apiservice-cert\") pod \"metallb-operator-webhook-server-559d754c8d-8sgn7\" (UID: \"013fb964-8d21-4b63-9afb-521a7e902920\") " pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:29.920553 master-0 kubenswrapper[31411]: I0224 02:30:29.920434 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/013fb964-8d21-4b63-9afb-521a7e902920-webhook-cert\") pod \"metallb-operator-webhook-server-559d754c8d-8sgn7\" (UID: \"013fb964-8d21-4b63-9afb-521a7e902920\") " pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:29.926650 master-0 kubenswrapper[31411]: I0224 02:30:29.923517 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/013fb964-8d21-4b63-9afb-521a7e902920-apiservice-cert\") pod 
\"metallb-operator-webhook-server-559d754c8d-8sgn7\" (UID: \"013fb964-8d21-4b63-9afb-521a7e902920\") " pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:29.943653 master-0 kubenswrapper[31411]: I0224 02:30:29.943560 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kv9z\" (UniqueName: \"kubernetes.io/projected/013fb964-8d21-4b63-9afb-521a7e902920-kube-api-access-6kv9z\") pod \"metallb-operator-webhook-server-559d754c8d-8sgn7\" (UID: \"013fb964-8d21-4b63-9afb-521a7e902920\") " pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:30.095501 master-0 kubenswrapper[31411]: I0224 02:30:30.095425 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:30.347973 master-0 kubenswrapper[31411]: I0224 02:30:30.346784 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7577845998-zvq74"] Feb 24 02:30:30.554477 master-0 kubenswrapper[31411]: I0224 02:30:30.553911 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6"] Feb 24 02:30:30.555430 master-0 kubenswrapper[31411]: I0224 02:30:30.555397 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" Feb 24 02:30:30.564676 master-0 kubenswrapper[31411]: I0224 02:30:30.564423 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 24 02:30:30.564676 master-0 kubenswrapper[31411]: I0224 02:30:30.564478 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 24 02:30:30.587227 master-0 kubenswrapper[31411]: I0224 02:30:30.587167 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6"] Feb 24 02:30:30.622442 master-0 kubenswrapper[31411]: I0224 02:30:30.621909 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7"] Feb 24 02:30:30.623051 master-0 kubenswrapper[31411]: W0224 02:30:30.622808 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod013fb964_8d21_4b63_9afb_521a7e902920.slice/crio-a185dc1a24b24aa1d370ca868756f92b419ebf7a0882014a105c2ac4ecb8605b WatchSource:0}: Error finding container a185dc1a24b24aa1d370ca868756f92b419ebf7a0882014a105c2ac4ecb8605b: Status 404 returned error can't find the container with id a185dc1a24b24aa1d370ca868756f92b419ebf7a0882014a105c2ac4ecb8605b Feb 24 02:30:30.664057 master-0 kubenswrapper[31411]: I0224 02:30:30.663995 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba583132-e465-4ed3-aed0-9975d6c1ba8f-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-5rkn6\" (UID: \"ba583132-e465-4ed3-aed0-9975d6c1ba8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" Feb 24 02:30:30.664324 master-0 kubenswrapper[31411]: 
I0224 02:30:30.664062 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvfw\" (UniqueName: \"kubernetes.io/projected/ba583132-e465-4ed3-aed0-9975d6c1ba8f-kube-api-access-plvfw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-5rkn6\" (UID: \"ba583132-e465-4ed3-aed0-9975d6c1ba8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" Feb 24 02:30:30.766343 master-0 kubenswrapper[31411]: I0224 02:30:30.766265 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba583132-e465-4ed3-aed0-9975d6c1ba8f-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-5rkn6\" (UID: \"ba583132-e465-4ed3-aed0-9975d6c1ba8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" Feb 24 02:30:30.766674 master-0 kubenswrapper[31411]: I0224 02:30:30.766360 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plvfw\" (UniqueName: \"kubernetes.io/projected/ba583132-e465-4ed3-aed0-9975d6c1ba8f-kube-api-access-plvfw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-5rkn6\" (UID: \"ba583132-e465-4ed3-aed0-9975d6c1ba8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" Feb 24 02:30:30.768041 master-0 kubenswrapper[31411]: I0224 02:30:30.767206 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba583132-e465-4ed3-aed0-9975d6c1ba8f-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-5rkn6\" (UID: \"ba583132-e465-4ed3-aed0-9975d6c1ba8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" Feb 24 02:30:30.794994 master-0 kubenswrapper[31411]: I0224 02:30:30.794915 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-plvfw\" (UniqueName: \"kubernetes.io/projected/ba583132-e465-4ed3-aed0-9975d6c1ba8f-kube-api-access-plvfw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-5rkn6\" (UID: \"ba583132-e465-4ed3-aed0-9975d6c1ba8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" Feb 24 02:30:30.886467 master-0 kubenswrapper[31411]: I0224 02:30:30.886305 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" Feb 24 02:30:31.328731 master-0 kubenswrapper[31411]: I0224 02:30:31.328625 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" event={"ID":"013fb964-8d21-4b63-9afb-521a7e902920","Type":"ContainerStarted","Data":"a185dc1a24b24aa1d370ca868756f92b419ebf7a0882014a105c2ac4ecb8605b"} Feb 24 02:30:31.331325 master-0 kubenswrapper[31411]: I0224 02:30:31.331247 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" event={"ID":"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7","Type":"ContainerStarted","Data":"b4a75108614eb5b0cb089493143e4be329a1d83dd01225aa4bb82639eee47fca"} Feb 24 02:30:31.411169 master-0 kubenswrapper[31411]: I0224 02:30:31.411106 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6"] Feb 24 02:30:32.343375 master-0 kubenswrapper[31411]: I0224 02:30:32.343308 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" event={"ID":"ba583132-e465-4ed3-aed0-9975d6c1ba8f","Type":"ContainerStarted","Data":"9f83f835cc92454609776fc9b16b50a16c10c046f08b2261f69032c80be0bc8c"} Feb 24 02:30:38.419030 master-0 kubenswrapper[31411]: I0224 02:30:38.418951 31411 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" event={"ID":"013fb964-8d21-4b63-9afb-521a7e902920","Type":"ContainerStarted","Data":"970732855cc69916d9d6bf856f6bda35ff29df6ad6151d839ecfb051fbdd174a"} Feb 24 02:30:38.419773 master-0 kubenswrapper[31411]: I0224 02:30:38.419197 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:38.421496 master-0 kubenswrapper[31411]: I0224 02:30:38.421461 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" event={"ID":"ba583132-e465-4ed3-aed0-9975d6c1ba8f","Type":"ContainerStarted","Data":"670e732a9a41260e62179f6eebe778e5bdaa04ef2eaeaf7c680844104cfc220c"} Feb 24 02:30:38.425498 master-0 kubenswrapper[31411]: I0224 02:30:38.425404 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" event={"ID":"4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7","Type":"ContainerStarted","Data":"f02da1e0977b7b6321b59662fbfb0b04afa9fc2bb27f314886b45114add3dbbd"} Feb 24 02:30:38.425565 master-0 kubenswrapper[31411]: I0224 02:30:38.425548 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:30:38.460413 master-0 kubenswrapper[31411]: I0224 02:30:38.460328 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" podStartSLOduration=2.051523923 podStartE2EDuration="9.460303225s" podCreationTimestamp="2026-02-24 02:30:29 +0000 UTC" firstStartedPulling="2026-02-24 02:30:30.627235234 +0000 UTC m=+573.844433070" lastFinishedPulling="2026-02-24 02:30:38.036014526 +0000 UTC m=+581.253212372" observedRunningTime="2026-02-24 02:30:38.451394747 +0000 UTC m=+581.668592593" 
watchObservedRunningTime="2026-02-24 02:30:38.460303225 +0000 UTC m=+581.677501071" Feb 24 02:30:38.498842 master-0 kubenswrapper[31411]: I0224 02:30:38.498540 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-5rkn6" podStartSLOduration=1.857018118 podStartE2EDuration="8.498517713s" podCreationTimestamp="2026-02-24 02:30:30 +0000 UTC" firstStartedPulling="2026-02-24 02:30:31.422853573 +0000 UTC m=+574.640051449" lastFinishedPulling="2026-02-24 02:30:38.064353188 +0000 UTC m=+581.281551044" observedRunningTime="2026-02-24 02:30:38.484608104 +0000 UTC m=+581.701805960" watchObservedRunningTime="2026-02-24 02:30:38.498517713 +0000 UTC m=+581.715715569" Feb 24 02:30:38.555840 master-0 kubenswrapper[31411]: I0224 02:30:38.555752 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" podStartSLOduration=1.905426223 podStartE2EDuration="9.55573244s" podCreationTimestamp="2026-02-24 02:30:29 +0000 UTC" firstStartedPulling="2026-02-24 02:30:30.351200646 +0000 UTC m=+573.568398492" lastFinishedPulling="2026-02-24 02:30:38.001506863 +0000 UTC m=+581.218704709" observedRunningTime="2026-02-24 02:30:38.544954119 +0000 UTC m=+581.762151975" watchObservedRunningTime="2026-02-24 02:30:38.55573244 +0000 UTC m=+581.772930276" Feb 24 02:30:40.806947 master-0 kubenswrapper[31411]: I0224 02:30:40.806879 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-j4m97"] Feb 24 02:30:40.807840 master-0 kubenswrapper[31411]: I0224 02:30:40.807806 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 02:30:40.810378 master-0 kubenswrapper[31411]: I0224 02:30:40.810334 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 24 02:30:40.810524 master-0 kubenswrapper[31411]: I0224 02:30:40.810411 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 24 02:30:40.860886 master-0 kubenswrapper[31411]: I0224 02:30:40.860818 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-j4m97"] Feb 24 02:30:40.921782 master-0 kubenswrapper[31411]: I0224 02:30:40.921716 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5pdq\" (UniqueName: \"kubernetes.io/projected/3b5844ba-fc8b-4df5-b9ea-bdf8b1054111-kube-api-access-x5pdq\") pod \"cert-manager-webhook-6888856db4-j4m97\" (UID: \"3b5844ba-fc8b-4df5-b9ea-bdf8b1054111\") " pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 02:30:40.922146 master-0 kubenswrapper[31411]: I0224 02:30:40.922122 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b5844ba-fc8b-4df5-b9ea-bdf8b1054111-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-j4m97\" (UID: \"3b5844ba-fc8b-4df5-b9ea-bdf8b1054111\") " pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 02:30:41.024057 master-0 kubenswrapper[31411]: I0224 02:30:41.023917 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5pdq\" (UniqueName: \"kubernetes.io/projected/3b5844ba-fc8b-4df5-b9ea-bdf8b1054111-kube-api-access-x5pdq\") pod \"cert-manager-webhook-6888856db4-j4m97\" (UID: \"3b5844ba-fc8b-4df5-b9ea-bdf8b1054111\") " pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 
02:30:41.024316 master-0 kubenswrapper[31411]: I0224 02:30:41.024223 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b5844ba-fc8b-4df5-b9ea-bdf8b1054111-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-j4m97\" (UID: \"3b5844ba-fc8b-4df5-b9ea-bdf8b1054111\") " pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 02:30:41.046974 master-0 kubenswrapper[31411]: I0224 02:30:41.046625 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5pdq\" (UniqueName: \"kubernetes.io/projected/3b5844ba-fc8b-4df5-b9ea-bdf8b1054111-kube-api-access-x5pdq\") pod \"cert-manager-webhook-6888856db4-j4m97\" (UID: \"3b5844ba-fc8b-4df5-b9ea-bdf8b1054111\") " pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 02:30:41.049241 master-0 kubenswrapper[31411]: I0224 02:30:41.049206 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b5844ba-fc8b-4df5-b9ea-bdf8b1054111-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-j4m97\" (UID: \"3b5844ba-fc8b-4df5-b9ea-bdf8b1054111\") " pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 02:30:41.122129 master-0 kubenswrapper[31411]: I0224 02:30:41.121982 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 02:30:41.615932 master-0 kubenswrapper[31411]: I0224 02:30:41.615869 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-j4m97"] Feb 24 02:30:41.622044 master-0 kubenswrapper[31411]: W0224 02:30:41.621991 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b5844ba_fc8b_4df5_b9ea_bdf8b1054111.slice/crio-c45e8bf98c1927e5d698672c76cf0a0f69a289784d05e3fd93717f389b74e6b6 WatchSource:0}: Error finding container c45e8bf98c1927e5d698672c76cf0a0f69a289784d05e3fd93717f389b74e6b6: Status 404 returned error can't find the container with id c45e8bf98c1927e5d698672c76cf0a0f69a289784d05e3fd93717f389b74e6b6 Feb 24 02:30:42.476659 master-0 kubenswrapper[31411]: I0224 02:30:42.468094 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" event={"ID":"3b5844ba-fc8b-4df5-b9ea-bdf8b1054111","Type":"ContainerStarted","Data":"c45e8bf98c1927e5d698672c76cf0a0f69a289784d05e3fd93717f389b74e6b6"} Feb 24 02:30:45.862960 master-0 kubenswrapper[31411]: I0224 02:30:45.862860 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hhm6l"] Feb 24 02:30:45.864732 master-0 kubenswrapper[31411]: I0224 02:30:45.864683 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" Feb 24 02:30:45.883770 master-0 kubenswrapper[31411]: I0224 02:30:45.883697 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hhm6l"] Feb 24 02:30:45.924655 master-0 kubenswrapper[31411]: I0224 02:30:45.923632 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28acc751-e697-480b-8cc5-81a6600181b7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hhm6l\" (UID: \"28acc751-e697-480b-8cc5-81a6600181b7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" Feb 24 02:30:45.924655 master-0 kubenswrapper[31411]: I0224 02:30:45.923720 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5f8l\" (UniqueName: \"kubernetes.io/projected/28acc751-e697-480b-8cc5-81a6600181b7-kube-api-access-h5f8l\") pod \"cert-manager-cainjector-5545bd876-hhm6l\" (UID: \"28acc751-e697-480b-8cc5-81a6600181b7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" Feb 24 02:30:46.027612 master-0 kubenswrapper[31411]: I0224 02:30:46.027524 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5f8l\" (UniqueName: \"kubernetes.io/projected/28acc751-e697-480b-8cc5-81a6600181b7-kube-api-access-h5f8l\") pod \"cert-manager-cainjector-5545bd876-hhm6l\" (UID: \"28acc751-e697-480b-8cc5-81a6600181b7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" Feb 24 02:30:46.027920 master-0 kubenswrapper[31411]: I0224 02:30:46.027896 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28acc751-e697-480b-8cc5-81a6600181b7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hhm6l\" (UID: \"28acc751-e697-480b-8cc5-81a6600181b7\") " 
pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" Feb 24 02:30:46.051089 master-0 kubenswrapper[31411]: I0224 02:30:46.051042 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5f8l\" (UniqueName: \"kubernetes.io/projected/28acc751-e697-480b-8cc5-81a6600181b7-kube-api-access-h5f8l\") pod \"cert-manager-cainjector-5545bd876-hhm6l\" (UID: \"28acc751-e697-480b-8cc5-81a6600181b7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" Feb 24 02:30:46.053414 master-0 kubenswrapper[31411]: I0224 02:30:46.053353 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28acc751-e697-480b-8cc5-81a6600181b7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hhm6l\" (UID: \"28acc751-e697-480b-8cc5-81a6600181b7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" Feb 24 02:30:46.215023 master-0 kubenswrapper[31411]: I0224 02:30:46.214955 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" Feb 24 02:30:48.162230 master-0 kubenswrapper[31411]: I0224 02:30:48.160435 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8"] Feb 24 02:30:48.162230 master-0 kubenswrapper[31411]: I0224 02:30:48.161706 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8" Feb 24 02:30:48.164527 master-0 kubenswrapper[31411]: I0224 02:30:48.164475 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 24 02:30:48.165032 master-0 kubenswrapper[31411]: I0224 02:30:48.165004 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 24 02:30:48.187601 master-0 kubenswrapper[31411]: I0224 02:30:48.185784 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8"] Feb 24 02:30:48.209830 master-0 kubenswrapper[31411]: I0224 02:30:48.209768 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk4kh\" (UniqueName: \"kubernetes.io/projected/874ce9d1-9595-482e-943a-81cf692e895b-kube-api-access-fk4kh\") pod \"obo-prometheus-operator-68bc856cb9-2lpl8\" (UID: \"874ce9d1-9595-482e-943a-81cf692e895b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8" Feb 24 02:30:48.314724 master-0 kubenswrapper[31411]: I0224 02:30:48.311919 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk4kh\" (UniqueName: \"kubernetes.io/projected/874ce9d1-9595-482e-943a-81cf692e895b-kube-api-access-fk4kh\") pod \"obo-prometheus-operator-68bc856cb9-2lpl8\" (UID: \"874ce9d1-9595-482e-943a-81cf692e895b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8" Feb 24 02:30:48.315681 master-0 kubenswrapper[31411]: I0224 02:30:48.315622 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d"] Feb 24 02:30:48.319602 master-0 kubenswrapper[31411]: I0224 02:30:48.316848 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" Feb 24 02:30:48.325530 master-0 kubenswrapper[31411]: I0224 02:30:48.325050 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d"] Feb 24 02:30:48.338602 master-0 kubenswrapper[31411]: I0224 02:30:48.336305 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 24 02:30:48.353790 master-0 kubenswrapper[31411]: I0224 02:30:48.353720 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8"] Feb 24 02:30:48.355941 master-0 kubenswrapper[31411]: I0224 02:30:48.355909 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" Feb 24 02:30:48.363291 master-0 kubenswrapper[31411]: I0224 02:30:48.363244 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hhm6l"] Feb 24 02:30:48.367716 master-0 kubenswrapper[31411]: I0224 02:30:48.366966 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk4kh\" (UniqueName: \"kubernetes.io/projected/874ce9d1-9595-482e-943a-81cf692e895b-kube-api-access-fk4kh\") pod \"obo-prometheus-operator-68bc856cb9-2lpl8\" (UID: \"874ce9d1-9595-482e-943a-81cf692e895b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8" Feb 24 02:30:48.389926 master-0 kubenswrapper[31411]: I0224 02:30:48.389880 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8"] Feb 24 02:30:48.413664 master-0 kubenswrapper[31411]: I0224 02:30:48.413612 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98dadea3-0bd6-4978-bc01-1a8b75da4751-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-qbg2d\" (UID: \"98dadea3-0bd6-4978-bc01-1a8b75da4751\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" Feb 24 02:30:48.413975 master-0 kubenswrapper[31411]: I0224 02:30:48.413956 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a70499b-5bfc-4d54-b605-3a2bf775f209-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-9t8v8\" (UID: \"0a70499b-5bfc-4d54-b605-3a2bf775f209\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" Feb 24 02:30:48.414169 master-0 kubenswrapper[31411]: I0224 02:30:48.414148 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a70499b-5bfc-4d54-b605-3a2bf775f209-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-9t8v8\" (UID: \"0a70499b-5bfc-4d54-b605-3a2bf775f209\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" Feb 24 02:30:48.414336 master-0 kubenswrapper[31411]: I0224 02:30:48.414317 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98dadea3-0bd6-4978-bc01-1a8b75da4751-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-qbg2d\" (UID: \"98dadea3-0bd6-4978-bc01-1a8b75da4751\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" Feb 24 02:30:48.485859 master-0 kubenswrapper[31411]: I0224 02:30:48.483210 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8" Feb 24 02:30:48.494708 master-0 kubenswrapper[31411]: I0224 02:30:48.491316 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-8lklf"] Feb 24 02:30:48.494708 master-0 kubenswrapper[31411]: I0224 02:30:48.493064 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:48.497551 master-0 kubenswrapper[31411]: I0224 02:30:48.497514 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 24 02:30:48.522670 master-0 kubenswrapper[31411]: I0224 02:30:48.516185 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a70499b-5bfc-4d54-b605-3a2bf775f209-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-9t8v8\" (UID: \"0a70499b-5bfc-4d54-b605-3a2bf775f209\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" Feb 24 02:30:48.522670 master-0 kubenswrapper[31411]: I0224 02:30:48.521557 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98dadea3-0bd6-4978-bc01-1a8b75da4751-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-qbg2d\" (UID: \"98dadea3-0bd6-4978-bc01-1a8b75da4751\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" Feb 24 02:30:48.522670 master-0 kubenswrapper[31411]: I0224 02:30:48.521719 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98dadea3-0bd6-4978-bc01-1a8b75da4751-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-qbg2d\" (UID: 
\"98dadea3-0bd6-4978-bc01-1a8b75da4751\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" Feb 24 02:30:48.522670 master-0 kubenswrapper[31411]: I0224 02:30:48.521752 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a70499b-5bfc-4d54-b605-3a2bf775f209-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-9t8v8\" (UID: \"0a70499b-5bfc-4d54-b605-3a2bf775f209\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" Feb 24 02:30:48.522670 master-0 kubenswrapper[31411]: I0224 02:30:48.518597 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-8lklf"] Feb 24 02:30:48.531121 master-0 kubenswrapper[31411]: I0224 02:30:48.528541 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98dadea3-0bd6-4978-bc01-1a8b75da4751-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-qbg2d\" (UID: \"98dadea3-0bd6-4978-bc01-1a8b75da4751\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" Feb 24 02:30:48.531121 master-0 kubenswrapper[31411]: I0224 02:30:48.529811 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a70499b-5bfc-4d54-b605-3a2bf775f209-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-9t8v8\" (UID: \"0a70499b-5bfc-4d54-b605-3a2bf775f209\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" Feb 24 02:30:48.544791 master-0 kubenswrapper[31411]: I0224 02:30:48.544733 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a70499b-5bfc-4d54-b605-3a2bf775f209-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-66946c8978-9t8v8\" (UID: \"0a70499b-5bfc-4d54-b605-3a2bf775f209\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" Feb 24 02:30:48.546990 master-0 kubenswrapper[31411]: I0224 02:30:48.546940 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98dadea3-0bd6-4978-bc01-1a8b75da4751-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66946c8978-qbg2d\" (UID: \"98dadea3-0bd6-4978-bc01-1a8b75da4751\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" Feb 24 02:30:48.552962 master-0 kubenswrapper[31411]: I0224 02:30:48.552917 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" event={"ID":"3b5844ba-fc8b-4df5-b9ea-bdf8b1054111","Type":"ContainerStarted","Data":"6338ad7f7b29eb8bc1c7fd583a57e5186a977e0414f7d071b98a1a83ded1aa2e"} Feb 24 02:30:48.553031 master-0 kubenswrapper[31411]: I0224 02:30:48.553019 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 02:30:48.553956 master-0 kubenswrapper[31411]: I0224 02:30:48.553886 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" event={"ID":"28acc751-e697-480b-8cc5-81a6600181b7","Type":"ContainerStarted","Data":"9d91db9d939be47667bb578d47f0cd9632fcedb2e8671413125362d274a503f0"} Feb 24 02:30:48.581523 master-0 kubenswrapper[31411]: I0224 02:30:48.581433 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" podStartSLOduration=2.308644506 podStartE2EDuration="8.581403944s" podCreationTimestamp="2026-02-24 02:30:40 +0000 UTC" firstStartedPulling="2026-02-24 02:30:41.623880663 +0000 UTC m=+584.841078509" lastFinishedPulling="2026-02-24 
02:30:47.896640101 +0000 UTC m=+591.113837947" observedRunningTime="2026-02-24 02:30:48.57269518 +0000 UTC m=+591.789893026" watchObservedRunningTime="2026-02-24 02:30:48.581403944 +0000 UTC m=+591.798601790" Feb 24 02:30:48.626101 master-0 kubenswrapper[31411]: I0224 02:30:48.623163 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f112702-e2e7-490d-8a47-7f8a098fc97b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-8lklf\" (UID: \"9f112702-e2e7-490d-8a47-7f8a098fc97b\") " pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:48.626101 master-0 kubenswrapper[31411]: I0224 02:30:48.625772 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl9nd\" (UniqueName: \"kubernetes.io/projected/9f112702-e2e7-490d-8a47-7f8a098fc97b-kube-api-access-dl9nd\") pod \"observability-operator-59bdc8b94-8lklf\" (UID: \"9f112702-e2e7-490d-8a47-7f8a098fc97b\") " pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:48.659448 master-0 kubenswrapper[31411]: I0224 02:30:48.659382 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" Feb 24 02:30:48.702811 master-0 kubenswrapper[31411]: I0224 02:30:48.702314 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-rpqh9"] Feb 24 02:30:48.703506 master-0 kubenswrapper[31411]: I0224 02:30:48.703479 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:30:48.732216 master-0 kubenswrapper[31411]: I0224 02:30:48.727804 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f112702-e2e7-490d-8a47-7f8a098fc97b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-8lklf\" (UID: \"9f112702-e2e7-490d-8a47-7f8a098fc97b\") " pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:48.732216 master-0 kubenswrapper[31411]: I0224 02:30:48.727998 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl9nd\" (UniqueName: \"kubernetes.io/projected/9f112702-e2e7-490d-8a47-7f8a098fc97b-kube-api-access-dl9nd\") pod \"observability-operator-59bdc8b94-8lklf\" (UID: \"9f112702-e2e7-490d-8a47-7f8a098fc97b\") " pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:48.732216 master-0 kubenswrapper[31411]: I0224 02:30:48.730120 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" Feb 24 02:30:48.741934 master-0 kubenswrapper[31411]: I0224 02:30:48.741879 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f112702-e2e7-490d-8a47-7f8a098fc97b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-8lklf\" (UID: \"9f112702-e2e7-490d-8a47-7f8a098fc97b\") " pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:48.747417 master-0 kubenswrapper[31411]: I0224 02:30:48.747377 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-rpqh9"] Feb 24 02:30:48.761254 master-0 kubenswrapper[31411]: I0224 02:30:48.761197 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl9nd\" (UniqueName: \"kubernetes.io/projected/9f112702-e2e7-490d-8a47-7f8a098fc97b-kube-api-access-dl9nd\") pod \"observability-operator-59bdc8b94-8lklf\" (UID: \"9f112702-e2e7-490d-8a47-7f8a098fc97b\") " pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:48.833623 master-0 kubenswrapper[31411]: I0224 02:30:48.825970 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:48.834208 master-0 kubenswrapper[31411]: I0224 02:30:48.834007 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8fbc22-13fb-4a97-b14f-9671c06636fb-openshift-service-ca\") pod \"perses-operator-5bf474d74f-rpqh9\" (UID: \"3f8fbc22-13fb-4a97-b14f-9671c06636fb\") " pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:30:48.834208 master-0 kubenswrapper[31411]: I0224 02:30:48.834086 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65wwg\" (UniqueName: \"kubernetes.io/projected/3f8fbc22-13fb-4a97-b14f-9671c06636fb-kube-api-access-65wwg\") pod \"perses-operator-5bf474d74f-rpqh9\" (UID: \"3f8fbc22-13fb-4a97-b14f-9671c06636fb\") " pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:30:48.938753 master-0 kubenswrapper[31411]: I0224 02:30:48.937549 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8fbc22-13fb-4a97-b14f-9671c06636fb-openshift-service-ca\") pod \"perses-operator-5bf474d74f-rpqh9\" (UID: \"3f8fbc22-13fb-4a97-b14f-9671c06636fb\") " pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:30:48.938753 master-0 kubenswrapper[31411]: I0224 02:30:48.937634 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65wwg\" (UniqueName: \"kubernetes.io/projected/3f8fbc22-13fb-4a97-b14f-9671c06636fb-kube-api-access-65wwg\") pod \"perses-operator-5bf474d74f-rpqh9\" (UID: \"3f8fbc22-13fb-4a97-b14f-9671c06636fb\") " pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:30:48.939239 master-0 kubenswrapper[31411]: I0224 02:30:48.939060 31411 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f8fbc22-13fb-4a97-b14f-9671c06636fb-openshift-service-ca\") pod \"perses-operator-5bf474d74f-rpqh9\" (UID: \"3f8fbc22-13fb-4a97-b14f-9671c06636fb\") " pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:30:48.971962 master-0 kubenswrapper[31411]: I0224 02:30:48.969534 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65wwg\" (UniqueName: \"kubernetes.io/projected/3f8fbc22-13fb-4a97-b14f-9671c06636fb-kube-api-access-65wwg\") pod \"perses-operator-5bf474d74f-rpqh9\" (UID: \"3f8fbc22-13fb-4a97-b14f-9671c06636fb\") " pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:30:49.032664 master-0 kubenswrapper[31411]: I0224 02:30:49.030164 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8"] Feb 24 02:30:49.052383 master-0 kubenswrapper[31411]: I0224 02:30:49.041530 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:30:49.077596 master-0 kubenswrapper[31411]: W0224 02:30:49.072831 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod874ce9d1_9595_482e_943a_81cf692e895b.slice/crio-0f701430940ef43ac7ff30e4c0d5a3bfa6f249c4bfbdea8555adbed26489f7bf WatchSource:0}: Error finding container 0f701430940ef43ac7ff30e4c0d5a3bfa6f249c4bfbdea8555adbed26489f7bf: Status 404 returned error can't find the container with id 0f701430940ef43ac7ff30e4c0d5a3bfa6f249c4bfbdea8555adbed26489f7bf Feb 24 02:30:49.210215 master-0 kubenswrapper[31411]: I0224 02:30:49.198817 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d"] Feb 24 02:30:49.277782 master-0 kubenswrapper[31411]: W0224 02:30:49.276909 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98dadea3_0bd6_4978_bc01_1a8b75da4751.slice/crio-db23f4c398520c6c11e3103cb712c703fc36fc4ce1989782e28e58a65c9e4636 WatchSource:0}: Error finding container db23f4c398520c6c11e3103cb712c703fc36fc4ce1989782e28e58a65c9e4636: Status 404 returned error can't find the container with id db23f4c398520c6c11e3103cb712c703fc36fc4ce1989782e28e58a65c9e4636 Feb 24 02:30:49.423152 master-0 kubenswrapper[31411]: I0224 02:30:49.407978 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8"] Feb 24 02:30:49.443373 master-0 kubenswrapper[31411]: I0224 02:30:49.432342 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-rpqh9"] Feb 24 02:30:49.443373 master-0 kubenswrapper[31411]: I0224 02:30:49.440155 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/observability-operator-59bdc8b94-8lklf"] Feb 24 02:30:49.447592 master-0 kubenswrapper[31411]: W0224 02:30:49.447476 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f112702_e2e7_490d_8a47_7f8a098fc97b.slice/crio-9d4418416a6272247194c3ff24d357f4397f87a1801b73dbe5437f477abde1dc WatchSource:0}: Error finding container 9d4418416a6272247194c3ff24d357f4397f87a1801b73dbe5437f477abde1dc: Status 404 returned error can't find the container with id 9d4418416a6272247194c3ff24d357f4397f87a1801b73dbe5437f477abde1dc Feb 24 02:30:49.572296 master-0 kubenswrapper[31411]: I0224 02:30:49.572192 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" event={"ID":"0a70499b-5bfc-4d54-b605-3a2bf775f209","Type":"ContainerStarted","Data":"3e2453389883068ea3349e0a7c71d9fcf4a428f74c3fafad205211adcb73e572"} Feb 24 02:30:49.578215 master-0 kubenswrapper[31411]: I0224 02:30:49.576793 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-8lklf" event={"ID":"9f112702-e2e7-490d-8a47-7f8a098fc97b","Type":"ContainerStarted","Data":"9d4418416a6272247194c3ff24d357f4397f87a1801b73dbe5437f477abde1dc"} Feb 24 02:30:49.578323 master-0 kubenswrapper[31411]: I0224 02:30:49.578208 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8" event={"ID":"874ce9d1-9595-482e-943a-81cf692e895b","Type":"ContainerStarted","Data":"0f701430940ef43ac7ff30e4c0d5a3bfa6f249c4bfbdea8555adbed26489f7bf"} Feb 24 02:30:49.579950 master-0 kubenswrapper[31411]: I0224 02:30:49.579882 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" 
event={"ID":"28acc751-e697-480b-8cc5-81a6600181b7","Type":"ContainerStarted","Data":"82e2437d11e10a9cc727771907c5191c0cf590a8c6d2e46431862d591b5ff7e0"} Feb 24 02:30:49.581276 master-0 kubenswrapper[31411]: I0224 02:30:49.581234 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" event={"ID":"98dadea3-0bd6-4978-bc01-1a8b75da4751","Type":"ContainerStarted","Data":"db23f4c398520c6c11e3103cb712c703fc36fc4ce1989782e28e58a65c9e4636"} Feb 24 02:30:49.583557 master-0 kubenswrapper[31411]: I0224 02:30:49.583513 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" event={"ID":"3f8fbc22-13fb-4a97-b14f-9671c06636fb","Type":"ContainerStarted","Data":"1e2e749d034f9ed035fbe1036f2ab5f2260622abfef24de2bef3782029fc3b23"} Feb 24 02:30:49.604186 master-0 kubenswrapper[31411]: I0224 02:30:49.604128 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-hhm6l" podStartSLOduration=4.604118585 podStartE2EDuration="4.604118585s" podCreationTimestamp="2026-02-24 02:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:30:49.600048111 +0000 UTC m=+592.817245957" watchObservedRunningTime="2026-02-24 02:30:49.604118585 +0000 UTC m=+592.821316431" Feb 24 02:30:50.102785 master-0 kubenswrapper[31411]: I0224 02:30:50.102713 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" Feb 24 02:30:51.757466 master-0 kubenswrapper[31411]: I0224 02:30:51.755707 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-54xdp"] Feb 24 02:30:51.757466 master-0 kubenswrapper[31411]: I0224 02:30:51.757154 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-54xdp" Feb 24 02:30:51.774412 master-0 kubenswrapper[31411]: I0224 02:30:51.774344 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-54xdp"] Feb 24 02:30:51.853150 master-0 kubenswrapper[31411]: I0224 02:30:51.853076 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5hxl\" (UniqueName: \"kubernetes.io/projected/2655f8d2-c670-4c07-92c6-391142ec7760-kube-api-access-d5hxl\") pod \"cert-manager-545d4d4674-54xdp\" (UID: \"2655f8d2-c670-4c07-92c6-391142ec7760\") " pod="cert-manager/cert-manager-545d4d4674-54xdp" Feb 24 02:30:51.854981 master-0 kubenswrapper[31411]: I0224 02:30:51.854965 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2655f8d2-c670-4c07-92c6-391142ec7760-bound-sa-token\") pod \"cert-manager-545d4d4674-54xdp\" (UID: \"2655f8d2-c670-4c07-92c6-391142ec7760\") " pod="cert-manager/cert-manager-545d4d4674-54xdp" Feb 24 02:30:51.964599 master-0 kubenswrapper[31411]: I0224 02:30:51.960964 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2655f8d2-c670-4c07-92c6-391142ec7760-bound-sa-token\") pod \"cert-manager-545d4d4674-54xdp\" (UID: \"2655f8d2-c670-4c07-92c6-391142ec7760\") " pod="cert-manager/cert-manager-545d4d4674-54xdp" Feb 24 02:30:51.964599 master-0 kubenswrapper[31411]: I0224 02:30:51.961048 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5hxl\" (UniqueName: \"kubernetes.io/projected/2655f8d2-c670-4c07-92c6-391142ec7760-kube-api-access-d5hxl\") pod \"cert-manager-545d4d4674-54xdp\" (UID: \"2655f8d2-c670-4c07-92c6-391142ec7760\") " pod="cert-manager/cert-manager-545d4d4674-54xdp" Feb 24 02:30:51.996335 master-0 
kubenswrapper[31411]: I0224 02:30:51.996283 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5hxl\" (UniqueName: \"kubernetes.io/projected/2655f8d2-c670-4c07-92c6-391142ec7760-kube-api-access-d5hxl\") pod \"cert-manager-545d4d4674-54xdp\" (UID: \"2655f8d2-c670-4c07-92c6-391142ec7760\") " pod="cert-manager/cert-manager-545d4d4674-54xdp" Feb 24 02:30:51.997046 master-0 kubenswrapper[31411]: I0224 02:30:51.997000 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2655f8d2-c670-4c07-92c6-391142ec7760-bound-sa-token\") pod \"cert-manager-545d4d4674-54xdp\" (UID: \"2655f8d2-c670-4c07-92c6-391142ec7760\") " pod="cert-manager/cert-manager-545d4d4674-54xdp" Feb 24 02:30:52.096176 master-0 kubenswrapper[31411]: I0224 02:30:52.095974 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-54xdp" Feb 24 02:30:52.559176 master-0 kubenswrapper[31411]: I0224 02:30:52.559083 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-54xdp"] Feb 24 02:30:54.673376 master-0 kubenswrapper[31411]: I0224 02:30:54.673298 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-54xdp" event={"ID":"2655f8d2-c670-4c07-92c6-391142ec7760","Type":"ContainerStarted","Data":"dedbc004832c3bdf11d44f6b3bc242d8cd8d80382d88f627af22ae386b9ae3db"} Feb 24 02:30:56.127605 master-0 kubenswrapper[31411]: I0224 02:30:56.125991 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-j4m97" Feb 24 02:30:59.738007 master-0 kubenswrapper[31411]: I0224 02:30:59.736566 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" 
event={"ID":"3f8fbc22-13fb-4a97-b14f-9671c06636fb","Type":"ContainerStarted","Data":"cc8360be9704aefeb56c0db429ee9ac9abdb6c70bdf54a0f49d4c50b103512d8"} Feb 24 02:30:59.738007 master-0 kubenswrapper[31411]: I0224 02:30:59.737933 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:30:59.749400 master-0 kubenswrapper[31411]: I0224 02:30:59.748536 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" event={"ID":"0a70499b-5bfc-4d54-b605-3a2bf775f209","Type":"ContainerStarted","Data":"2e3a10a6046146dd13526e6dcf2c8490c414e89ae5ce6f25f381f20270156a3b"} Feb 24 02:30:59.753096 master-0 kubenswrapper[31411]: I0224 02:30:59.752320 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8" event={"ID":"874ce9d1-9595-482e-943a-81cf692e895b","Type":"ContainerStarted","Data":"f5f1e8b630e21fc501247360d1af525baccf0847528f77b708808bb90b33388d"} Feb 24 02:30:59.756263 master-0 kubenswrapper[31411]: I0224 02:30:59.756219 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-54xdp" event={"ID":"2655f8d2-c670-4c07-92c6-391142ec7760","Type":"ContainerStarted","Data":"fab2f786111b999c68daf2d7814ad224d3508756c7aba791a175278a9b52f889"} Feb 24 02:30:59.759007 master-0 kubenswrapper[31411]: I0224 02:30:59.758938 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-8lklf" event={"ID":"9f112702-e2e7-490d-8a47-7f8a098fc97b","Type":"ContainerStarted","Data":"f3f51b0323b2a1916947183dd9c6ab47ccb98095e4f8acc808d9526b5aba28ac"} Feb 24 02:30:59.761681 master-0 kubenswrapper[31411]: I0224 02:30:59.760261 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:59.764735 
master-0 kubenswrapper[31411]: I0224 02:30:59.763715 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-8lklf" Feb 24 02:30:59.764735 master-0 kubenswrapper[31411]: I0224 02:30:59.763918 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" event={"ID":"98dadea3-0bd6-4978-bc01-1a8b75da4751","Type":"ContainerStarted","Data":"ff46fa9abd8952115546cbf63d48732c6bdeb6d9b132d21cf49928017e657693"} Feb 24 02:30:59.779375 master-0 kubenswrapper[31411]: I0224 02:30:59.779239 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" podStartSLOduration=2.375621189 podStartE2EDuration="11.779198801s" podCreationTimestamp="2026-02-24 02:30:48 +0000 UTC" firstStartedPulling="2026-02-24 02:30:49.448663433 +0000 UTC m=+592.665861279" lastFinishedPulling="2026-02-24 02:30:58.852241005 +0000 UTC m=+602.069438891" observedRunningTime="2026-02-24 02:30:59.772415941 +0000 UTC m=+602.989613827" watchObservedRunningTime="2026-02-24 02:30:59.779198801 +0000 UTC m=+602.996396677" Feb 24 02:30:59.813692 master-0 kubenswrapper[31411]: I0224 02:30:59.813516 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-8lklf" podStartSLOduration=2.392926322 podStartE2EDuration="11.813478838s" podCreationTimestamp="2026-02-24 02:30:48 +0000 UTC" firstStartedPulling="2026-02-24 02:30:49.450147495 +0000 UTC m=+592.667345341" lastFinishedPulling="2026-02-24 02:30:58.870700021 +0000 UTC m=+602.087897857" observedRunningTime="2026-02-24 02:30:59.809740684 +0000 UTC m=+603.026938620" watchObservedRunningTime="2026-02-24 02:30:59.813478838 +0000 UTC m=+603.030676734" Feb 24 02:30:59.883539 master-0 kubenswrapper[31411]: I0224 02:30:59.883462 31411 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-2lpl8" podStartSLOduration=2.080398525 podStartE2EDuration="11.883436982s" podCreationTimestamp="2026-02-24 02:30:48 +0000 UTC" firstStartedPulling="2026-02-24 02:30:49.080372338 +0000 UTC m=+592.297570184" lastFinishedPulling="2026-02-24 02:30:58.883410795 +0000 UTC m=+602.100608641" observedRunningTime="2026-02-24 02:30:59.854767201 +0000 UTC m=+603.071965057" watchObservedRunningTime="2026-02-24 02:30:59.883436982 +0000 UTC m=+603.100634848" Feb 24 02:30:59.901590 master-0 kubenswrapper[31411]: I0224 02:30:59.901462 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-qbg2d" podStartSLOduration=2.421185912 podStartE2EDuration="11.901443025s" podCreationTimestamp="2026-02-24 02:30:48 +0000 UTC" firstStartedPulling="2026-02-24 02:30:49.329074944 +0000 UTC m=+592.546272790" lastFinishedPulling="2026-02-24 02:30:58.809332057 +0000 UTC m=+602.026529903" observedRunningTime="2026-02-24 02:30:59.89696061 +0000 UTC m=+603.114158466" watchObservedRunningTime="2026-02-24 02:30:59.901443025 +0000 UTC m=+603.118640881" Feb 24 02:30:59.958069 master-0 kubenswrapper[31411]: I0224 02:30:59.957999 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-54xdp" podStartSLOduration=8.957976234 podStartE2EDuration="8.957976234s" podCreationTimestamp="2026-02-24 02:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:30:59.950994209 +0000 UTC m=+603.168192055" watchObservedRunningTime="2026-02-24 02:30:59.957976234 +0000 UTC m=+603.175174080" Feb 24 02:30:59.965729 master-0 kubenswrapper[31411]: I0224 02:30:59.965685 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-66946c8978-9t8v8" podStartSLOduration=2.581352046 podStartE2EDuration="11.965670939s" podCreationTimestamp="2026-02-24 02:30:48 +0000 UTC" firstStartedPulling="2026-02-24 02:30:49.430374983 +0000 UTC m=+592.647572829" lastFinishedPulling="2026-02-24 02:30:58.814693886 +0000 UTC m=+602.031891722" observedRunningTime="2026-02-24 02:30:59.928984154 +0000 UTC m=+603.146182010" watchObservedRunningTime="2026-02-24 02:30:59.965670939 +0000 UTC m=+603.182868785" Feb 24 02:31:09.047824 master-0 kubenswrapper[31411]: I0224 02:31:09.047723 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-rpqh9" Feb 24 02:31:09.796546 master-0 kubenswrapper[31411]: I0224 02:31:09.796448 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7577845998-zvq74" Feb 24 02:31:18.667601 master-0 kubenswrapper[31411]: I0224 02:31:18.665735 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs"] Feb 24 02:31:18.667601 master-0 kubenswrapper[31411]: I0224 02:31:18.667235 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" Feb 24 02:31:18.668448 master-0 kubenswrapper[31411]: I0224 02:31:18.667999 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gll2f"] Feb 24 02:31:18.674592 master-0 kubenswrapper[31411]: I0224 02:31:18.670326 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 24 02:31:18.674592 master-0 kubenswrapper[31411]: I0224 02:31:18.671448 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.682590 master-0 kubenswrapper[31411]: I0224 02:31:18.678760 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 24 02:31:18.682590 master-0 kubenswrapper[31411]: I0224 02:31:18.678891 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 24 02:31:18.701598 master-0 kubenswrapper[31411]: I0224 02:31:18.693607 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs"] Feb 24 02:31:18.777991 master-0 kubenswrapper[31411]: I0224 02:31:18.774618 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6rd\" (UniqueName: \"kubernetes.io/projected/092e38f4-b68c-422f-8663-f152fa7bb09f-kube-api-access-bs6rd\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.777991 master-0 kubenswrapper[31411]: I0224 02:31:18.774739 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/092e38f4-b68c-422f-8663-f152fa7bb09f-metrics-certs\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.781826 master-0 kubenswrapper[31411]: I0224 02:31:18.780516 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-metrics\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.781826 master-0 kubenswrapper[31411]: I0224 02:31:18.780757 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" 
(UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-frr-conf\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.781826 master-0 kubenswrapper[31411]: I0224 02:31:18.780847 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-reloader\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.781826 master-0 kubenswrapper[31411]: I0224 02:31:18.781080 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83bea055-a58c-42dd-8ae4-755f7f2944c0-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-lthbs\" (UID: \"83bea055-a58c-42dd-8ae4-755f7f2944c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" Feb 24 02:31:18.781826 master-0 kubenswrapper[31411]: I0224 02:31:18.781101 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6knq4\" (UniqueName: \"kubernetes.io/projected/83bea055-a58c-42dd-8ae4-755f7f2944c0-kube-api-access-6knq4\") pod \"frr-k8s-webhook-server-78b44bf5bb-lthbs\" (UID: \"83bea055-a58c-42dd-8ae4-755f7f2944c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" Feb 24 02:31:18.781826 master-0 kubenswrapper[31411]: I0224 02:31:18.781155 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/092e38f4-b68c-422f-8663-f152fa7bb09f-frr-startup\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.781826 master-0 kubenswrapper[31411]: I0224 02:31:18.781188 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-frr-sockets\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.811325 master-0 kubenswrapper[31411]: I0224 02:31:18.811265 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lbfkl"] Feb 24 02:31:18.813279 master-0 kubenswrapper[31411]: I0224 02:31:18.813240 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.829592 master-0 kubenswrapper[31411]: I0224 02:31:18.825299 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 24 02:31:18.829592 master-0 kubenswrapper[31411]: I0224 02:31:18.825469 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 24 02:31:18.829592 master-0 kubenswrapper[31411]: I0224 02:31:18.825663 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 24 02:31:18.846690 master-0 kubenswrapper[31411]: I0224 02:31:18.845411 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-s2t6d"] Feb 24 02:31:18.847379 master-0 kubenswrapper[31411]: I0224 02:31:18.847313 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:18.853347 master-0 kubenswrapper[31411]: I0224 02:31:18.853315 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 24 02:31:18.878853 master-0 kubenswrapper[31411]: I0224 02:31:18.876693 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-s2t6d"] Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886554 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-frr-conf\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886628 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-reloader\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886655 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7ct4\" (UniqueName: \"kubernetes.io/projected/2376dbda-b2e8-45e5-af4c-7382f0994ae3-kube-api-access-r7ct4\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886683 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nrrb\" (UniqueName: \"kubernetes.io/projected/5dd28fe3-673b-4b02-8fab-ab06e03d54e4-kube-api-access-4nrrb\") pod \"controller-69bbfbf88f-s2t6d\" (UID: \"5dd28fe3-673b-4b02-8fab-ab06e03d54e4\") " 
pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886707 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83bea055-a58c-42dd-8ae4-755f7f2944c0-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-lthbs\" (UID: \"83bea055-a58c-42dd-8ae4-755f7f2944c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886728 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6knq4\" (UniqueName: \"kubernetes.io/projected/83bea055-a58c-42dd-8ae4-755f7f2944c0-kube-api-access-6knq4\") pod \"frr-k8s-webhook-server-78b44bf5bb-lthbs\" (UID: \"83bea055-a58c-42dd-8ae4-755f7f2944c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886754 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5dd28fe3-673b-4b02-8fab-ab06e03d54e4-metrics-certs\") pod \"controller-69bbfbf88f-s2t6d\" (UID: \"5dd28fe3-673b-4b02-8fab-ab06e03d54e4\") " pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886777 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/092e38f4-b68c-422f-8663-f152fa7bb09f-frr-startup\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886801 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-frr-sockets\") pod \"frr-k8s-gll2f\" (UID: 
\"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886849 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-metrics-certs\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886889 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6rd\" (UniqueName: \"kubernetes.io/projected/092e38f4-b68c-422f-8663-f152fa7bb09f-kube-api-access-bs6rd\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886925 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/092e38f4-b68c-422f-8663-f152fa7bb09f-metrics-certs\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886956 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-metrics\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.886977 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2376dbda-b2e8-45e5-af4c-7382f0994ae3-metallb-excludel2\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " 
pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.887033 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5dd28fe3-673b-4b02-8fab-ab06e03d54e4-cert\") pod \"controller-69bbfbf88f-s2t6d\" (UID: \"5dd28fe3-673b-4b02-8fab-ab06e03d54e4\") " pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.887056 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-memberlist\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.887649 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-frr-conf\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.887863 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-reloader\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.891839 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/092e38f4-b68c-422f-8663-f152fa7bb09f-frr-startup\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.892097 31411 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-frr-sockets\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.892288 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/092e38f4-b68c-422f-8663-f152fa7bb09f-metrics\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: E0224 02:31:18.892498 31411 secret.go:189] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: E0224 02:31:18.892544 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/092e38f4-b68c-422f-8663-f152fa7bb09f-metrics-certs podName:092e38f4-b68c-422f-8663-f152fa7bb09f nodeName:}" failed. No retries permitted until 2026-02-24 02:31:19.392527253 +0000 UTC m=+622.609725099 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/092e38f4-b68c-422f-8663-f152fa7bb09f-metrics-certs") pod "frr-k8s-gll2f" (UID: "092e38f4-b68c-422f-8663-f152fa7bb09f") : secret "frr-k8s-certs-secret" not found Feb 24 02:31:18.894770 master-0 kubenswrapper[31411]: I0224 02:31:18.893310 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83bea055-a58c-42dd-8ae4-755f7f2944c0-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-lthbs\" (UID: \"83bea055-a58c-42dd-8ae4-755f7f2944c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" Feb 24 02:31:18.935678 master-0 kubenswrapper[31411]: I0224 02:31:18.925897 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs6rd\" (UniqueName: \"kubernetes.io/projected/092e38f4-b68c-422f-8663-f152fa7bb09f-kube-api-access-bs6rd\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:18.935678 master-0 kubenswrapper[31411]: I0224 02:31:18.934335 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6knq4\" (UniqueName: \"kubernetes.io/projected/83bea055-a58c-42dd-8ae4-755f7f2944c0-kube-api-access-6knq4\") pod \"frr-k8s-webhook-server-78b44bf5bb-lthbs\" (UID: \"83bea055-a58c-42dd-8ae4-755f7f2944c0\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" Feb 24 02:31:18.996605 master-0 kubenswrapper[31411]: I0224 02:31:18.992610 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-metrics-certs\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.996605 master-0 kubenswrapper[31411]: I0224 02:31:18.992725 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2376dbda-b2e8-45e5-af4c-7382f0994ae3-metallb-excludel2\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.996605 master-0 kubenswrapper[31411]: I0224 02:31:18.992751 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5dd28fe3-673b-4b02-8fab-ab06e03d54e4-cert\") pod \"controller-69bbfbf88f-s2t6d\" (UID: \"5dd28fe3-673b-4b02-8fab-ab06e03d54e4\") " pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:18.996605 master-0 kubenswrapper[31411]: I0224 02:31:18.992777 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-memberlist\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.996605 master-0 kubenswrapper[31411]: I0224 02:31:18.992816 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7ct4\" (UniqueName: \"kubernetes.io/projected/2376dbda-b2e8-45e5-af4c-7382f0994ae3-kube-api-access-r7ct4\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.996605 master-0 kubenswrapper[31411]: I0224 02:31:18.992837 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nrrb\" (UniqueName: \"kubernetes.io/projected/5dd28fe3-673b-4b02-8fab-ab06e03d54e4-kube-api-access-4nrrb\") pod \"controller-69bbfbf88f-s2t6d\" (UID: \"5dd28fe3-673b-4b02-8fab-ab06e03d54e4\") " pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:18.996605 master-0 kubenswrapper[31411]: I0224 02:31:18.992859 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5dd28fe3-673b-4b02-8fab-ab06e03d54e4-metrics-certs\") pod \"controller-69bbfbf88f-s2t6d\" (UID: \"5dd28fe3-673b-4b02-8fab-ab06e03d54e4\") " pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:18.996605 master-0 kubenswrapper[31411]: E0224 02:31:18.996546 31411 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 24 02:31:18.997110 master-0 kubenswrapper[31411]: E0224 02:31:18.996648 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-memberlist podName:2376dbda-b2e8-45e5-af4c-7382f0994ae3 nodeName:}" failed. No retries permitted until 2026-02-24 02:31:19.4966235 +0000 UTC m=+622.713821346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-memberlist") pod "speaker-lbfkl" (UID: "2376dbda-b2e8-45e5-af4c-7382f0994ae3") : secret "metallb-memberlist" not found Feb 24 02:31:18.997110 master-0 kubenswrapper[31411]: I0224 02:31:18.996727 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2376dbda-b2e8-45e5-af4c-7382f0994ae3-metallb-excludel2\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:18.998129 master-0 kubenswrapper[31411]: I0224 02:31:18.997278 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5dd28fe3-673b-4b02-8fab-ab06e03d54e4-metrics-certs\") pod \"controller-69bbfbf88f-s2t6d\" (UID: \"5dd28fe3-673b-4b02-8fab-ab06e03d54e4\") " pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:18.999104 master-0 kubenswrapper[31411]: I0224 02:31:18.998695 31411 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 24 
02:31:19.001897 master-0 kubenswrapper[31411]: I0224 02:31:19.001860 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-metrics-certs\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:19.016597 master-0 kubenswrapper[31411]: I0224 02:31:19.013968 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5dd28fe3-673b-4b02-8fab-ab06e03d54e4-cert\") pod \"controller-69bbfbf88f-s2t6d\" (UID: \"5dd28fe3-673b-4b02-8fab-ab06e03d54e4\") " pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:19.031167 master-0 kubenswrapper[31411]: I0224 02:31:19.031032 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" Feb 24 02:31:19.054598 master-0 kubenswrapper[31411]: I0224 02:31:19.049477 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7ct4\" (UniqueName: \"kubernetes.io/projected/2376dbda-b2e8-45e5-af4c-7382f0994ae3-kube-api-access-r7ct4\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:19.054598 master-0 kubenswrapper[31411]: I0224 02:31:19.052404 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nrrb\" (UniqueName: \"kubernetes.io/projected/5dd28fe3-673b-4b02-8fab-ab06e03d54e4-kube-api-access-4nrrb\") pod \"controller-69bbfbf88f-s2t6d\" (UID: \"5dd28fe3-673b-4b02-8fab-ab06e03d54e4\") " pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:19.178600 master-0 kubenswrapper[31411]: I0224 02:31:19.176917 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:19.423938 master-0 kubenswrapper[31411]: I0224 02:31:19.423832 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/092e38f4-b68c-422f-8663-f152fa7bb09f-metrics-certs\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:19.449672 master-0 kubenswrapper[31411]: I0224 02:31:19.448438 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/092e38f4-b68c-422f-8663-f152fa7bb09f-metrics-certs\") pod \"frr-k8s-gll2f\" (UID: \"092e38f4-b68c-422f-8663-f152fa7bb09f\") " pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:19.540895 master-0 kubenswrapper[31411]: I0224 02:31:19.540455 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-memberlist\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:19.540895 master-0 kubenswrapper[31411]: E0224 02:31:19.540681 31411 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 24 02:31:19.540895 master-0 kubenswrapper[31411]: E0224 02:31:19.540792 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-memberlist podName:2376dbda-b2e8-45e5-af4c-7382f0994ae3 nodeName:}" failed. No retries permitted until 2026-02-24 02:31:20.540766726 +0000 UTC m=+623.757964572 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-memberlist") pod "speaker-lbfkl" (UID: "2376dbda-b2e8-45e5-af4c-7382f0994ae3") : secret "metallb-memberlist" not found Feb 24 02:31:19.601772 master-0 kubenswrapper[31411]: I0224 02:31:19.601688 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs"] Feb 24 02:31:19.629593 master-0 kubenswrapper[31411]: W0224 02:31:19.625948 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83bea055_a58c_42dd_8ae4_755f7f2944c0.slice/crio-3ff268eb60a7a9d482bad2d9ba7d33d4c778334a33d24c73a11cd102dd130e45 WatchSource:0}: Error finding container 3ff268eb60a7a9d482bad2d9ba7d33d4c778334a33d24c73a11cd102dd130e45: Status 404 returned error can't find the container with id 3ff268eb60a7a9d482bad2d9ba7d33d4c778334a33d24c73a11cd102dd130e45 Feb 24 02:31:19.685642 master-0 kubenswrapper[31411]: I0224 02:31:19.684658 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-gll2f" Feb 24 02:31:19.748867 master-0 kubenswrapper[31411]: I0224 02:31:19.748804 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-s2t6d"] Feb 24 02:31:20.062618 master-0 kubenswrapper[31411]: I0224 02:31:20.062374 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerStarted","Data":"03e637dbf2d00d88c5be186a43b74507f1476f3105b04862c3cb9fe023bd8632"} Feb 24 02:31:20.065123 master-0 kubenswrapper[31411]: I0224 02:31:20.064507 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-s2t6d" event={"ID":"5dd28fe3-673b-4b02-8fab-ab06e03d54e4","Type":"ContainerStarted","Data":"c263ad6a01f47aa9ffe1d65247d8c8a6636833e3f5294db726fcbc8452c1e0e9"} Feb 24 02:31:20.065123 master-0 kubenswrapper[31411]: I0224 02:31:20.064600 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-s2t6d" event={"ID":"5dd28fe3-673b-4b02-8fab-ab06e03d54e4","Type":"ContainerStarted","Data":"b2e2d57ce29f09d57888a3bf21c37c3a1743ec39507461656ac46a38cb80a52f"} Feb 24 02:31:20.065842 master-0 kubenswrapper[31411]: I0224 02:31:20.065800 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" event={"ID":"83bea055-a58c-42dd-8ae4-755f7f2944c0","Type":"ContainerStarted","Data":"3ff268eb60a7a9d482bad2d9ba7d33d4c778334a33d24c73a11cd102dd130e45"} Feb 24 02:31:20.567892 master-0 kubenswrapper[31411]: I0224 02:31:20.567806 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-memberlist\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:20.573297 master-0 kubenswrapper[31411]: I0224 
02:31:20.573257 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2376dbda-b2e8-45e5-af4c-7382f0994ae3-memberlist\") pod \"speaker-lbfkl\" (UID: \"2376dbda-b2e8-45e5-af4c-7382f0994ae3\") " pod="metallb-system/speaker-lbfkl" Feb 24 02:31:20.660548 master-0 kubenswrapper[31411]: I0224 02:31:20.658533 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lbfkl" Feb 24 02:31:20.716459 master-0 kubenswrapper[31411]: W0224 02:31:20.716400 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2376dbda_b2e8_45e5_af4c_7382f0994ae3.slice/crio-722f20ddf767fae1ff276e23d86adb74a352b193c5c5fd434c3a5ceca344b1e6 WatchSource:0}: Error finding container 722f20ddf767fae1ff276e23d86adb74a352b193c5c5fd434c3a5ceca344b1e6: Status 404 returned error can't find the container with id 722f20ddf767fae1ff276e23d86adb74a352b193c5c5fd434c3a5ceca344b1e6 Feb 24 02:31:20.775150 master-0 kubenswrapper[31411]: I0224 02:31:20.774653 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt"] Feb 24 02:31:20.776690 master-0 kubenswrapper[31411]: I0224 02:31:20.776641 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt" Feb 24 02:31:20.806614 master-0 kubenswrapper[31411]: I0224 02:31:20.802916 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d"] Feb 24 02:31:20.806614 master-0 kubenswrapper[31411]: I0224 02:31:20.804705 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" Feb 24 02:31:20.809194 master-0 kubenswrapper[31411]: I0224 02:31:20.809150 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 24 02:31:20.824695 master-0 kubenswrapper[31411]: I0224 02:31:20.821052 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt"] Feb 24 02:31:20.832282 master-0 kubenswrapper[31411]: I0224 02:31:20.832040 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d"] Feb 24 02:31:20.840176 master-0 kubenswrapper[31411]: I0224 02:31:20.839694 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bpzvz"] Feb 24 02:31:20.841383 master-0 kubenswrapper[31411]: I0224 02:31:20.841340 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.881020 master-0 kubenswrapper[31411]: I0224 02:31:20.880915 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-ovs-socket\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.881205 master-0 kubenswrapper[31411]: I0224 02:31:20.881047 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrzzl\" (UniqueName: \"kubernetes.io/projected/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-kube-api-access-vrzzl\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.881205 master-0 kubenswrapper[31411]: I0224 02:31:20.881159 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-nmstate-lock\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.881301 master-0 kubenswrapper[31411]: I0224 02:31:20.881200 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92671039-2aaf-47d0-bfe1-7395d0d41e17-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-rft7d\" (UID: \"92671039-2aaf-47d0-bfe1-7395d0d41e17\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" Feb 24 02:31:20.882664 master-0 kubenswrapper[31411]: I0224 02:31:20.881266 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsbqg\" (UniqueName: \"kubernetes.io/projected/005ab2f6-2bb8-45d7-9846-0149a6aa9742-kube-api-access-zsbqg\") pod \"nmstate-metrics-58c85c668d-zx9wt\" (UID: \"005ab2f6-2bb8-45d7-9846-0149a6aa9742\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt" Feb 24 02:31:20.882847 master-0 kubenswrapper[31411]: I0224 02:31:20.882766 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-dbus-socket\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.883066 master-0 kubenswrapper[31411]: I0224 02:31:20.883018 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9sqd\" (UniqueName: \"kubernetes.io/projected/92671039-2aaf-47d0-bfe1-7395d0d41e17-kube-api-access-x9sqd\") pod \"nmstate-webhook-866bcb46dc-rft7d\" (UID: \"92671039-2aaf-47d0-bfe1-7395d0d41e17\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" Feb 24 02:31:20.986936 master-0 kubenswrapper[31411]: I0224 02:31:20.984617 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-nmstate-lock\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.986936 master-0 kubenswrapper[31411]: I0224 02:31:20.984659 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92671039-2aaf-47d0-bfe1-7395d0d41e17-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-rft7d\" (UID: \"92671039-2aaf-47d0-bfe1-7395d0d41e17\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" Feb 24 02:31:20.986936 master-0 kubenswrapper[31411]: I0224 02:31:20.984716 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsbqg\" (UniqueName: \"kubernetes.io/projected/005ab2f6-2bb8-45d7-9846-0149a6aa9742-kube-api-access-zsbqg\") pod \"nmstate-metrics-58c85c668d-zx9wt\" (UID: \"005ab2f6-2bb8-45d7-9846-0149a6aa9742\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt" Feb 24 02:31:20.986936 master-0 kubenswrapper[31411]: I0224 02:31:20.984742 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-dbus-socket\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.986936 master-0 kubenswrapper[31411]: I0224 02:31:20.984790 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9sqd\" (UniqueName: \"kubernetes.io/projected/92671039-2aaf-47d0-bfe1-7395d0d41e17-kube-api-access-x9sqd\") pod 
\"nmstate-webhook-866bcb46dc-rft7d\" (UID: \"92671039-2aaf-47d0-bfe1-7395d0d41e17\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" Feb 24 02:31:20.986936 master-0 kubenswrapper[31411]: I0224 02:31:20.984855 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-ovs-socket\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.986936 master-0 kubenswrapper[31411]: I0224 02:31:20.984897 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrzzl\" (UniqueName: \"kubernetes.io/projected/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-kube-api-access-vrzzl\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.986936 master-0 kubenswrapper[31411]: I0224 02:31:20.985290 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-nmstate-lock\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.988122 master-0 kubenswrapper[31411]: I0224 02:31:20.988061 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-ovs-socket\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.988213 master-0 kubenswrapper[31411]: I0224 02:31:20.988168 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc"] Feb 24 02:31:20.989984 master-0 kubenswrapper[31411]: I0224 02:31:20.989953 31411 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-dbus-socket\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:20.990098 master-0 kubenswrapper[31411]: I0224 02:31:20.990030 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:20.998675 master-0 kubenswrapper[31411]: I0224 02:31:20.991210 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92671039-2aaf-47d0-bfe1-7395d0d41e17-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-rft7d\" (UID: \"92671039-2aaf-47d0-bfe1-7395d0d41e17\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" Feb 24 02:31:20.998675 master-0 kubenswrapper[31411]: I0224 02:31:20.992928 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 24 02:31:20.998675 master-0 kubenswrapper[31411]: I0224 02:31:20.993020 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 24 02:31:21.003590 master-0 kubenswrapper[31411]: I0224 02:31:21.000890 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc"] Feb 24 02:31:21.005755 master-0 kubenswrapper[31411]: I0224 02:31:21.004537 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrzzl\" (UniqueName: \"kubernetes.io/projected/ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5-kube-api-access-vrzzl\") pod \"nmstate-handler-bpzvz\" (UID: \"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5\") " pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:21.034663 master-0 kubenswrapper[31411]: I0224 02:31:21.034240 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x9sqd\" (UniqueName: \"kubernetes.io/projected/92671039-2aaf-47d0-bfe1-7395d0d41e17-kube-api-access-x9sqd\") pod \"nmstate-webhook-866bcb46dc-rft7d\" (UID: \"92671039-2aaf-47d0-bfe1-7395d0d41e17\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" Feb 24 02:31:21.035434 master-0 kubenswrapper[31411]: I0224 02:31:21.035394 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsbqg\" (UniqueName: \"kubernetes.io/projected/005ab2f6-2bb8-45d7-9846-0149a6aa9742-kube-api-access-zsbqg\") pod \"nmstate-metrics-58c85c668d-zx9wt\" (UID: \"005ab2f6-2bb8-45d7-9846-0149a6aa9742\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt" Feb 24 02:31:21.087812 master-0 kubenswrapper[31411]: I0224 02:31:21.087755 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/67ece790-a3fa-4b4f-8e72-a7079e197d01-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nsdtc\" (UID: \"67ece790-a3fa-4b4f-8e72-a7079e197d01\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.088134 master-0 kubenswrapper[31411]: I0224 02:31:21.087830 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/67ece790-a3fa-4b4f-8e72-a7079e197d01-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nsdtc\" (UID: \"67ece790-a3fa-4b4f-8e72-a7079e197d01\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.088638 master-0 kubenswrapper[31411]: I0224 02:31:21.088591 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz7xj\" (UniqueName: \"kubernetes.io/projected/67ece790-a3fa-4b4f-8e72-a7079e197d01-kube-api-access-vz7xj\") pod \"nmstate-console-plugin-5c78fc5d65-nsdtc\" 
(UID: \"67ece790-a3fa-4b4f-8e72-a7079e197d01\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.116788 master-0 kubenswrapper[31411]: I0224 02:31:21.115743 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lbfkl" event={"ID":"2376dbda-b2e8-45e5-af4c-7382f0994ae3","Type":"ContainerStarted","Data":"722f20ddf767fae1ff276e23d86adb74a352b193c5c5fd434c3a5ceca344b1e6"} Feb 24 02:31:21.137247 master-0 kubenswrapper[31411]: I0224 02:31:21.137177 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt" Feb 24 02:31:21.152741 master-0 kubenswrapper[31411]: I0224 02:31:21.152696 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" Feb 24 02:31:21.196893 master-0 kubenswrapper[31411]: I0224 02:31:21.196514 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:21.200724 master-0 kubenswrapper[31411]: I0224 02:31:21.200672 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/67ece790-a3fa-4b4f-8e72-a7079e197d01-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nsdtc\" (UID: \"67ece790-a3fa-4b4f-8e72-a7079e197d01\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.201322 master-0 kubenswrapper[31411]: I0224 02:31:21.200763 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/67ece790-a3fa-4b4f-8e72-a7079e197d01-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nsdtc\" (UID: \"67ece790-a3fa-4b4f-8e72-a7079e197d01\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.203689 master-0 kubenswrapper[31411]: I0224 02:31:21.203318 31411 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/67ece790-a3fa-4b4f-8e72-a7079e197d01-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nsdtc\" (UID: \"67ece790-a3fa-4b4f-8e72-a7079e197d01\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.204926 master-0 kubenswrapper[31411]: I0224 02:31:21.204875 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz7xj\" (UniqueName: \"kubernetes.io/projected/67ece790-a3fa-4b4f-8e72-a7079e197d01-kube-api-access-vz7xj\") pod \"nmstate-console-plugin-5c78fc5d65-nsdtc\" (UID: \"67ece790-a3fa-4b4f-8e72-a7079e197d01\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.214221 master-0 kubenswrapper[31411]: I0224 02:31:21.214176 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/67ece790-a3fa-4b4f-8e72-a7079e197d01-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nsdtc\" (UID: \"67ece790-a3fa-4b4f-8e72-a7079e197d01\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.232273 master-0 kubenswrapper[31411]: I0224 02:31:21.232208 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz7xj\" (UniqueName: \"kubernetes.io/projected/67ece790-a3fa-4b4f-8e72-a7079e197d01-kube-api-access-vz7xj\") pod \"nmstate-console-plugin-5c78fc5d65-nsdtc\" (UID: \"67ece790-a3fa-4b4f-8e72-a7079e197d01\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.239830 master-0 kubenswrapper[31411]: I0224 02:31:21.239772 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7db5f64756-h92rx"] Feb 24 02:31:21.252897 master-0 kubenswrapper[31411]: I0224 02:31:21.252781 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.261060 master-0 kubenswrapper[31411]: I0224 02:31:21.261022 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7db5f64756-h92rx"] Feb 24 02:31:21.311957 master-0 kubenswrapper[31411]: I0224 02:31:21.311650 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-console-config\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.311957 master-0 kubenswrapper[31411]: I0224 02:31:21.311854 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-console-serving-cert\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.311957 master-0 kubenswrapper[31411]: I0224 02:31:21.311919 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-console-oauth-config\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.311957 master-0 kubenswrapper[31411]: I0224 02:31:21.311955 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-trusted-ca-bundle\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.312838 master-0 
kubenswrapper[31411]: I0224 02:31:21.312001 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-oauth-serving-cert\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.312838 master-0 kubenswrapper[31411]: I0224 02:31:21.312050 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfrvs\" (UniqueName: \"kubernetes.io/projected/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-kube-api-access-zfrvs\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.312838 master-0 kubenswrapper[31411]: I0224 02:31:21.312087 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-service-ca\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.415772 master-0 kubenswrapper[31411]: I0224 02:31:21.413900 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-console-serving-cert\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.415772 master-0 kubenswrapper[31411]: I0224 02:31:21.414006 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-console-oauth-config\") pod \"console-7db5f64756-h92rx\" (UID: 
\"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.415772 master-0 kubenswrapper[31411]: I0224 02:31:21.414040 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-trusted-ca-bundle\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.415772 master-0 kubenswrapper[31411]: I0224 02:31:21.414060 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-oauth-serving-cert\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.415772 master-0 kubenswrapper[31411]: I0224 02:31:21.414107 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfrvs\" (UniqueName: \"kubernetes.io/projected/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-kube-api-access-zfrvs\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.415772 master-0 kubenswrapper[31411]: I0224 02:31:21.414136 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-service-ca\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.415772 master-0 kubenswrapper[31411]: I0224 02:31:21.414174 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-console-config\") pod 
\"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.415772 master-0 kubenswrapper[31411]: I0224 02:31:21.415141 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-console-config\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.418546 master-0 kubenswrapper[31411]: I0224 02:31:21.418461 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-console-serving-cert\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.424619 master-0 kubenswrapper[31411]: I0224 02:31:21.424557 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-console-oauth-config\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.427211 master-0 kubenswrapper[31411]: I0224 02:31:21.427045 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-trusted-ca-bundle\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.428345 master-0 kubenswrapper[31411]: I0224 02:31:21.428310 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-service-ca\") pod 
\"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.428442 master-0 kubenswrapper[31411]: I0224 02:31:21.428391 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-oauth-serving-cert\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.430058 master-0 kubenswrapper[31411]: I0224 02:31:21.430018 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" Feb 24 02:31:21.452081 master-0 kubenswrapper[31411]: I0224 02:31:21.452026 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfrvs\" (UniqueName: \"kubernetes.io/projected/4cdc2a12-e6fc-4502-9814-32dd7b61b02e-kube-api-access-zfrvs\") pod \"console-7db5f64756-h92rx\" (UID: \"4cdc2a12-e6fc-4502-9814-32dd7b61b02e\") " pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.577218 master-0 kubenswrapper[31411]: I0224 02:31:21.577157 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:21.724463 master-0 kubenswrapper[31411]: W0224 02:31:21.724379 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92671039_2aaf_47d0_bfe1_7395d0d41e17.slice/crio-fd6220afa3727461ad677506899452ed345acc51b24c6f861c9f38c06cc4df53 WatchSource:0}: Error finding container fd6220afa3727461ad677506899452ed345acc51b24c6f861c9f38c06cc4df53: Status 404 returned error can't find the container with id fd6220afa3727461ad677506899452ed345acc51b24c6f861c9f38c06cc4df53 Feb 24 02:31:21.725915 master-0 kubenswrapper[31411]: I0224 02:31:21.725870 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d"] Feb 24 02:31:21.785211 master-0 kubenswrapper[31411]: I0224 02:31:21.784555 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt"] Feb 24 02:31:21.788753 master-0 kubenswrapper[31411]: W0224 02:31:21.788607 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005ab2f6_2bb8_45d7_9846_0149a6aa9742.slice/crio-d27053e8c7cc4f4dd192a95e557104e6342d69f821e5cd34046853c93e1e6315 WatchSource:0}: Error finding container d27053e8c7cc4f4dd192a95e557104e6342d69f821e5cd34046853c93e1e6315: Status 404 returned error can't find the container with id d27053e8c7cc4f4dd192a95e557104e6342d69f821e5cd34046853c93e1e6315 Feb 24 02:31:21.947655 master-0 kubenswrapper[31411]: W0224 02:31:21.947439 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67ece790_a3fa_4b4f_8e72_a7079e197d01.slice/crio-48ca9228b37a349adc5cd1a46684c5ddb50d0ebe00e3cb4eb9e87fdcd2005a8b WatchSource:0}: Error finding container 48ca9228b37a349adc5cd1a46684c5ddb50d0ebe00e3cb4eb9e87fdcd2005a8b: Status 404 
returned error can't find the container with id 48ca9228b37a349adc5cd1a46684c5ddb50d0ebe00e3cb4eb9e87fdcd2005a8b Feb 24 02:31:21.948017 master-0 kubenswrapper[31411]: I0224 02:31:21.947939 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc"] Feb 24 02:31:22.037440 master-0 kubenswrapper[31411]: I0224 02:31:22.037380 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7db5f64756-h92rx"] Feb 24 02:31:22.056670 master-0 kubenswrapper[31411]: W0224 02:31:22.056616 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cdc2a12_e6fc_4502_9814_32dd7b61b02e.slice/crio-cc641f39f6a1d2454453322a9aec89d5085dc67a15141e4f29fcf74d8014325e WatchSource:0}: Error finding container cc641f39f6a1d2454453322a9aec89d5085dc67a15141e4f29fcf74d8014325e: Status 404 returned error can't find the container with id cc641f39f6a1d2454453322a9aec89d5085dc67a15141e4f29fcf74d8014325e Feb 24 02:31:22.133934 master-0 kubenswrapper[31411]: I0224 02:31:22.133872 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" event={"ID":"92671039-2aaf-47d0-bfe1-7395d0d41e17","Type":"ContainerStarted","Data":"fd6220afa3727461ad677506899452ed345acc51b24c6f861c9f38c06cc4df53"} Feb 24 02:31:22.136793 master-0 kubenswrapper[31411]: I0224 02:31:22.136760 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lbfkl" event={"ID":"2376dbda-b2e8-45e5-af4c-7382f0994ae3","Type":"ContainerStarted","Data":"c64ab974bcff1c5303d29d99fc039d03a531934bb5b1b055414c9fb9036c549d"} Feb 24 02:31:22.139541 master-0 kubenswrapper[31411]: I0224 02:31:22.138878 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt" 
event={"ID":"005ab2f6-2bb8-45d7-9846-0149a6aa9742","Type":"ContainerStarted","Data":"d27053e8c7cc4f4dd192a95e557104e6342d69f821e5cd34046853c93e1e6315"} Feb 24 02:31:22.141985 master-0 kubenswrapper[31411]: I0224 02:31:22.141940 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-s2t6d" event={"ID":"5dd28fe3-673b-4b02-8fab-ab06e03d54e4","Type":"ContainerStarted","Data":"514faffb6ecbab105177e1054cf46681a0b49e1ec7bb97dc07fc86d24614b55c"} Feb 24 02:31:22.142130 master-0 kubenswrapper[31411]: I0224 02:31:22.142103 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:22.143435 master-0 kubenswrapper[31411]: I0224 02:31:22.143380 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bpzvz" event={"ID":"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5","Type":"ContainerStarted","Data":"c17ab4d860542bf5deb1d3cea45d5bdb4246096026a37b9194e413ad5974abbf"} Feb 24 02:31:22.145558 master-0 kubenswrapper[31411]: I0224 02:31:22.145531 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" event={"ID":"67ece790-a3fa-4b4f-8e72-a7079e197d01","Type":"ContainerStarted","Data":"48ca9228b37a349adc5cd1a46684c5ddb50d0ebe00e3cb4eb9e87fdcd2005a8b"} Feb 24 02:31:22.146898 master-0 kubenswrapper[31411]: I0224 02:31:22.146872 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7db5f64756-h92rx" event={"ID":"4cdc2a12-e6fc-4502-9814-32dd7b61b02e","Type":"ContainerStarted","Data":"cc641f39f6a1d2454453322a9aec89d5085dc67a15141e4f29fcf74d8014325e"} Feb 24 02:31:22.172470 master-0 kubenswrapper[31411]: I0224 02:31:22.172391 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-s2t6d" podStartSLOduration=2.812398679 podStartE2EDuration="4.172371598s" podCreationTimestamp="2026-02-24 02:31:18 
+0000 UTC" firstStartedPulling="2026-02-24 02:31:19.922695382 +0000 UTC m=+623.139893228" lastFinishedPulling="2026-02-24 02:31:21.282668301 +0000 UTC m=+624.499866147" observedRunningTime="2026-02-24 02:31:22.164355034 +0000 UTC m=+625.381552880" watchObservedRunningTime="2026-02-24 02:31:22.172371598 +0000 UTC m=+625.389569444" Feb 24 02:31:23.164737 master-0 kubenswrapper[31411]: I0224 02:31:23.164675 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7db5f64756-h92rx" event={"ID":"4cdc2a12-e6fc-4502-9814-32dd7b61b02e","Type":"ContainerStarted","Data":"761f7d45489b5877812f787f36f0e685bfdff05ea747fc75d1bc2656aa414fc8"} Feb 24 02:31:23.174717 master-0 kubenswrapper[31411]: I0224 02:31:23.174557 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lbfkl" event={"ID":"2376dbda-b2e8-45e5-af4c-7382f0994ae3","Type":"ContainerStarted","Data":"ac1ffee3374dc53b4eb0da760c5ae739319293f40499b344857f42c0e18b16f4"} Feb 24 02:31:23.174846 master-0 kubenswrapper[31411]: I0224 02:31:23.174810 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lbfkl" Feb 24 02:31:23.195352 master-0 kubenswrapper[31411]: I0224 02:31:23.195282 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7db5f64756-h92rx" podStartSLOduration=2.195266864 podStartE2EDuration="2.195266864s" podCreationTimestamp="2026-02-24 02:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:31:23.186656824 +0000 UTC m=+626.403854680" watchObservedRunningTime="2026-02-24 02:31:23.195266864 +0000 UTC m=+626.412464730" Feb 24 02:31:23.215058 master-0 kubenswrapper[31411]: I0224 02:31:23.214975 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lbfkl" podStartSLOduration=4.395772067 
podStartE2EDuration="5.214948834s" podCreationTimestamp="2026-02-24 02:31:18 +0000 UTC" firstStartedPulling="2026-02-24 02:31:21.234471065 +0000 UTC m=+624.451668911" lastFinishedPulling="2026-02-24 02:31:22.053647812 +0000 UTC m=+625.270845678" observedRunningTime="2026-02-24 02:31:23.208452862 +0000 UTC m=+626.425650718" watchObservedRunningTime="2026-02-24 02:31:23.214948834 +0000 UTC m=+626.432146680" Feb 24 02:31:28.256671 master-0 kubenswrapper[31411]: I0224 02:31:28.256534 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt" event={"ID":"005ab2f6-2bb8-45d7-9846-0149a6aa9742","Type":"ContainerStarted","Data":"bed64040280524c823266f141e393d5909026eeb19759cb82e23a9f9d13f9702"} Feb 24 02:31:28.284921 master-0 kubenswrapper[31411]: I0224 02:31:28.284851 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerDied","Data":"05ed8d5c2f6915fc4992d863c2d8e4a88ca28377be286350a71aba979c27e24d"} Feb 24 02:31:28.285131 master-0 kubenswrapper[31411]: I0224 02:31:28.284875 31411 generic.go:334] "Generic (PLEG): container finished" podID="092e38f4-b68c-422f-8663-f152fa7bb09f" containerID="05ed8d5c2f6915fc4992d863c2d8e4a88ca28377be286350a71aba979c27e24d" exitCode=0 Feb 24 02:31:28.297537 master-0 kubenswrapper[31411]: I0224 02:31:28.297474 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" event={"ID":"83bea055-a58c-42dd-8ae4-755f7f2944c0","Type":"ContainerStarted","Data":"0caee7f427e0ba06a0efb4ff1c1d859b4a15f5b47bea5e87a1541f214bb09e82"} Feb 24 02:31:28.297807 master-0 kubenswrapper[31411]: I0224 02:31:28.297755 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" Feb 24 02:31:28.300199 master-0 kubenswrapper[31411]: I0224 02:31:28.300094 31411 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" event={"ID":"67ece790-a3fa-4b4f-8e72-a7079e197d01","Type":"ContainerStarted","Data":"ff095db5c831c1c9798dc8a6659bbc77afdfe63f28a4664d880b39144efa6a9b"} Feb 24 02:31:28.303329 master-0 kubenswrapper[31411]: I0224 02:31:28.303275 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" event={"ID":"92671039-2aaf-47d0-bfe1-7395d0d41e17","Type":"ContainerStarted","Data":"484d8d485c2d8d7ccae70bb04fbcc447c249cf229719da797c44ac196338a20e"} Feb 24 02:31:28.303460 master-0 kubenswrapper[31411]: I0224 02:31:28.303428 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" Feb 24 02:31:28.337430 master-0 kubenswrapper[31411]: I0224 02:31:28.337334 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nsdtc" podStartSLOduration=2.421198936 podStartE2EDuration="8.337308293s" podCreationTimestamp="2026-02-24 02:31:20 +0000 UTC" firstStartedPulling="2026-02-24 02:31:21.951560321 +0000 UTC m=+625.168758177" lastFinishedPulling="2026-02-24 02:31:27.867669648 +0000 UTC m=+631.084867534" observedRunningTime="2026-02-24 02:31:28.336064779 +0000 UTC m=+631.553262625" watchObservedRunningTime="2026-02-24 02:31:28.337308293 +0000 UTC m=+631.554506139" Feb 24 02:31:28.390063 master-0 kubenswrapper[31411]: I0224 02:31:28.389924 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d" podStartSLOduration=2.198903238 podStartE2EDuration="8.389895192s" podCreationTimestamp="2026-02-24 02:31:20 +0000 UTC" firstStartedPulling="2026-02-24 02:31:21.727832583 +0000 UTC m=+624.945030419" lastFinishedPulling="2026-02-24 02:31:27.918824487 +0000 UTC m=+631.136022373" observedRunningTime="2026-02-24 02:31:28.38337851 +0000 UTC 
m=+631.600576356" watchObservedRunningTime="2026-02-24 02:31:28.389895192 +0000 UTC m=+631.607093048" Feb 24 02:31:28.394019 master-0 kubenswrapper[31411]: I0224 02:31:28.393863 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs" podStartSLOduration=2.107274416 podStartE2EDuration="10.393850842s" podCreationTimestamp="2026-02-24 02:31:18 +0000 UTC" firstStartedPulling="2026-02-24 02:31:19.632136678 +0000 UTC m=+622.849334514" lastFinishedPulling="2026-02-24 02:31:27.918713054 +0000 UTC m=+631.135910940" observedRunningTime="2026-02-24 02:31:28.364958115 +0000 UTC m=+631.582155981" watchObservedRunningTime="2026-02-24 02:31:28.393850842 +0000 UTC m=+631.611048698" Feb 24 02:31:29.183161 master-0 kubenswrapper[31411]: I0224 02:31:29.183069 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-s2t6d" Feb 24 02:31:29.318525 master-0 kubenswrapper[31411]: I0224 02:31:29.318426 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bpzvz" event={"ID":"ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5","Type":"ContainerStarted","Data":"7310f8a19303bc7b2a375d87eb6eca4a1feec646d109aa7c610bab10190351be"} Feb 24 02:31:29.319494 master-0 kubenswrapper[31411]: I0224 02:31:29.318649 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bpzvz" Feb 24 02:31:29.322169 master-0 kubenswrapper[31411]: I0224 02:31:29.322090 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt" event={"ID":"005ab2f6-2bb8-45d7-9846-0149a6aa9742","Type":"ContainerStarted","Data":"95528c3b8324c24f8eb658aa5568f5de100b804cc94ee6e9dad0b6151d50573f"} Feb 24 02:31:29.325638 master-0 kubenswrapper[31411]: I0224 02:31:29.325559 31411 generic.go:334] "Generic (PLEG): container finished" podID="092e38f4-b68c-422f-8663-f152fa7bb09f" 
containerID="67abc9587e66ae33ea42b12ff12a7ea42059618deeb441b7a36c6d341d6fa7c8" exitCode=0 Feb 24 02:31:29.325862 master-0 kubenswrapper[31411]: I0224 02:31:29.325814 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerDied","Data":"67abc9587e66ae33ea42b12ff12a7ea42059618deeb441b7a36c6d341d6fa7c8"} Feb 24 02:31:29.360869 master-0 kubenswrapper[31411]: I0224 02:31:29.358925 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bpzvz" podStartSLOduration=2.788821522 podStartE2EDuration="9.358898573s" podCreationTimestamp="2026-02-24 02:31:20 +0000 UTC" firstStartedPulling="2026-02-24 02:31:21.294022128 +0000 UTC m=+624.511219974" lastFinishedPulling="2026-02-24 02:31:27.864099149 +0000 UTC m=+631.081297025" observedRunningTime="2026-02-24 02:31:29.351145427 +0000 UTC m=+632.568343313" watchObservedRunningTime="2026-02-24 02:31:29.358898573 +0000 UTC m=+632.576096449" Feb 24 02:31:29.419852 master-0 kubenswrapper[31411]: I0224 02:31:29.418429 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx9wt" podStartSLOduration=3.349752577 podStartE2EDuration="9.418344863s" podCreationTimestamp="2026-02-24 02:31:20 +0000 UTC" firstStartedPulling="2026-02-24 02:31:21.793824466 +0000 UTC m=+625.011022322" lastFinishedPulling="2026-02-24 02:31:27.862416762 +0000 UTC m=+631.079614608" observedRunningTime="2026-02-24 02:31:29.411340888 +0000 UTC m=+632.628538774" watchObservedRunningTime="2026-02-24 02:31:29.418344863 +0000 UTC m=+632.635542749" Feb 24 02:31:30.341801 master-0 kubenswrapper[31411]: I0224 02:31:30.341712 31411 generic.go:334] "Generic (PLEG): container finished" podID="092e38f4-b68c-422f-8663-f152fa7bb09f" containerID="133994a19a6253d9d15db0d4b3b86df2a2b71ba9060bf94bcb0c894584b680a4" exitCode=0 Feb 24 02:31:30.342694 master-0 
kubenswrapper[31411]: I0224 02:31:30.341823 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerDied","Data":"133994a19a6253d9d15db0d4b3b86df2a2b71ba9060bf94bcb0c894584b680a4"} Feb 24 02:31:31.366547 master-0 kubenswrapper[31411]: I0224 02:31:31.366455 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerStarted","Data":"0a82acf6eb69f5e0da994b03e904abc39ba9e5b11ef46ed0bce906a5e2240771"} Feb 24 02:31:31.366547 master-0 kubenswrapper[31411]: I0224 02:31:31.366546 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerStarted","Data":"433f3cc5211281d7d74041c9643ca631372d1030ce144028297441c8d28632ad"} Feb 24 02:31:31.367482 master-0 kubenswrapper[31411]: I0224 02:31:31.366596 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerStarted","Data":"b1c3d25e797baeb720b88f78326c67b1aa7167320e8bebfa8060c5bf9145789f"} Feb 24 02:31:31.367482 master-0 kubenswrapper[31411]: I0224 02:31:31.366619 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerStarted","Data":"1c4640c0947685d753f59dba872b6b9a093c6f54a5890b2ed6bd24f081964c96"} Feb 24 02:31:31.578405 master-0 kubenswrapper[31411]: I0224 02:31:31.578317 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:31.578405 master-0 kubenswrapper[31411]: I0224 02:31:31.578402 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7db5f64756-h92rx" Feb 24 02:31:31.587762 master-0 
kubenswrapper[31411]: I0224 02:31:31.587685 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7db5f64756-h92rx"
Feb 24 02:31:32.389599 master-0 kubenswrapper[31411]: I0224 02:31:32.389466 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerStarted","Data":"65f837b2df87de1019cc05350d9dc28347d62e222561ca2ea4338f747667fffd"}
Feb 24 02:31:32.389599 master-0 kubenswrapper[31411]: I0224 02:31:32.389559 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gll2f" event={"ID":"092e38f4-b68c-422f-8663-f152fa7bb09f","Type":"ContainerStarted","Data":"a33ffc839491fc8d243d81c41b7cb22eb34b8f90f506d6b3712d4413443b46fd"}
Feb 24 02:31:32.393505 master-0 kubenswrapper[31411]: I0224 02:31:32.393448 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7db5f64756-h92rx"
Feb 24 02:31:32.444456 master-0 kubenswrapper[31411]: I0224 02:31:32.444335 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gll2f" podStartSLOduration=6.438609127 podStartE2EDuration="14.444303038s" podCreationTimestamp="2026-02-24 02:31:18 +0000 UTC" firstStartedPulling="2026-02-24 02:31:19.85707321 +0000 UTC m=+623.074271066" lastFinishedPulling="2026-02-24 02:31:27.862767091 +0000 UTC m=+631.079964977" observedRunningTime="2026-02-24 02:31:32.432334914 +0000 UTC m=+635.649532800" watchObservedRunningTime="2026-02-24 02:31:32.444303038 +0000 UTC m=+635.661500914"
Feb 24 02:31:32.545180 master-0 kubenswrapper[31411]: I0224 02:31:32.545118 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6647cb86fc-wzjr8"]
Feb 24 02:31:33.401712 master-0 kubenswrapper[31411]: I0224 02:31:33.401640 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gll2f"
Feb 24 02:31:34.685639 master-0 kubenswrapper[31411]: I0224 02:31:34.685401 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gll2f"
Feb 24 02:31:34.743763 master-0 kubenswrapper[31411]: I0224 02:31:34.743690 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gll2f"
Feb 24 02:31:36.241703 master-0 kubenswrapper[31411]: I0224 02:31:36.241638 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bpzvz"
Feb 24 02:31:39.039540 master-0 kubenswrapper[31411]: I0224 02:31:39.039448 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-lthbs"
Feb 24 02:31:40.663857 master-0 kubenswrapper[31411]: I0224 02:31:40.663791 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lbfkl"
Feb 24 02:31:41.160630 master-0 kubenswrapper[31411]: I0224 02:31:41.160519 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-rft7d"
Feb 24 02:31:46.629392 master-0 kubenswrapper[31411]: I0224 02:31:46.629314 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-q84n6"]
Feb 24 02:31:46.631659 master-0 kubenswrapper[31411]: I0224 02:31:46.631522 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.635399 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-csi-plugin-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.635478 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-lvmd-config\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.635565 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-registration-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.635642 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-pod-volumes-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.635727 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-metrics-cert\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.635739 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.635827 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-node-plugin-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.635901 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-run-udev\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.635987 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5545\" (UniqueName: \"kubernetes.io/projected/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-kube-api-access-d5545\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.636042 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-sys\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.636292 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-device-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.636332 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-file-lock-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.668238 master-0 kubenswrapper[31411]: I0224 02:31:46.645289 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-q84n6"]
Feb 24 02:31:46.738345 master-0 kubenswrapper[31411]: I0224 02:31:46.738274 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-file-lock-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.738751 master-0 kubenswrapper[31411]: I0224 02:31:46.738383 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-csi-plugin-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.738751 master-0 kubenswrapper[31411]: I0224 02:31:46.738409 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-lvmd-config\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.738751 master-0 kubenswrapper[31411]: I0224 02:31:46.738445 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-registration-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.738751 master-0 kubenswrapper[31411]: I0224 02:31:46.738480 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-pod-volumes-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.738751 master-0 kubenswrapper[31411]: I0224 02:31:46.738522 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-metrics-cert\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.738751 master-0 kubenswrapper[31411]: I0224 02:31:46.738671 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-node-plugin-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.738751 master-0 kubenswrapper[31411]: I0224 02:31:46.738718 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-run-udev\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.739251 master-0 kubenswrapper[31411]: I0224 02:31:46.738764 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5545\" (UniqueName: \"kubernetes.io/projected/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-kube-api-access-d5545\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.739251 master-0 kubenswrapper[31411]: I0224 02:31:46.738798 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-sys\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.739251 master-0 kubenswrapper[31411]: I0224 02:31:46.738850 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-device-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.739251 master-0 kubenswrapper[31411]: I0224 02:31:46.738972 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-device-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.740138 master-0 kubenswrapper[31411]: I0224 02:31:46.739834 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-run-udev\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.740138 master-0 kubenswrapper[31411]: I0224 02:31:46.739854 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-file-lock-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.740368 master-0 kubenswrapper[31411]: I0224 02:31:46.740173 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-node-plugin-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.740368 master-0 kubenswrapper[31411]: I0224 02:31:46.740269 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-sys\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.740368 master-0 kubenswrapper[31411]: I0224 02:31:46.740313 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-lvmd-config\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.740648 master-0 kubenswrapper[31411]: I0224 02:31:46.740373 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-pod-volumes-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.740648 master-0 kubenswrapper[31411]: I0224 02:31:46.740464 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-registration-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.740648 master-0 kubenswrapper[31411]: I0224 02:31:46.740493 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-csi-plugin-dir\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.745345 master-0 kubenswrapper[31411]: I0224 02:31:46.745305 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-metrics-cert\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:46.759613 master-0 kubenswrapper[31411]: I0224 02:31:46.758026 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5545\" (UniqueName: \"kubernetes.io/projected/98ad869d-6a1d-4f5c-be96-1b58de1dd31b-kube-api-access-d5545\") pod \"vg-manager-q84n6\" (UID: \"98ad869d-6a1d-4f5c-be96-1b58de1dd31b\") " pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:47.003317 master-0 kubenswrapper[31411]: I0224 02:31:47.003215 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:47.539957 master-0 kubenswrapper[31411]: W0224 02:31:47.539888 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ad869d_6a1d_4f5c_be96_1b58de1dd31b.slice/crio-4a75cc8843525c73e001c4ca31d40c6a1f33ea2d97317bd03e1eeba57f8ff43b WatchSource:0}: Error finding container 4a75cc8843525c73e001c4ca31d40c6a1f33ea2d97317bd03e1eeba57f8ff43b: Status 404 returned error can't find the container with id 4a75cc8843525c73e001c4ca31d40c6a1f33ea2d97317bd03e1eeba57f8ff43b
Feb 24 02:31:47.546699 master-0 kubenswrapper[31411]: I0224 02:31:47.544652 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-q84n6"]
Feb 24 02:31:47.628016 master-0 kubenswrapper[31411]: I0224 02:31:47.627936 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-q84n6" event={"ID":"98ad869d-6a1d-4f5c-be96-1b58de1dd31b","Type":"ContainerStarted","Data":"4a75cc8843525c73e001c4ca31d40c6a1f33ea2d97317bd03e1eeba57f8ff43b"}
Feb 24 02:31:48.642374 master-0 kubenswrapper[31411]: I0224 02:31:48.642299 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-q84n6" event={"ID":"98ad869d-6a1d-4f5c-be96-1b58de1dd31b","Type":"ContainerStarted","Data":"0a60c2807ea307cae6f618a01e4f713d77c4b7c373d2fc8bb6bdf8a3ab8b272e"}
Feb 24 02:31:48.682962 master-0 kubenswrapper[31411]: I0224 02:31:48.682845 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-q84n6" podStartSLOduration=2.682812307 podStartE2EDuration="2.682812307s" podCreationTimestamp="2026-02-24 02:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:31:48.680985566 +0000 UTC m=+651.898183452" watchObservedRunningTime="2026-02-24 02:31:48.682812307 +0000 UTC m=+651.900010193"
Feb 24 02:31:49.690666 master-0 kubenswrapper[31411]: I0224 02:31:49.690482 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gll2f"
Feb 24 02:31:50.684445 master-0 kubenswrapper[31411]: I0224 02:31:50.684362 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-q84n6_98ad869d-6a1d-4f5c-be96-1b58de1dd31b/vg-manager/0.log"
Feb 24 02:31:50.684445 master-0 kubenswrapper[31411]: I0224 02:31:50.684443 31411 generic.go:334] "Generic (PLEG): container finished" podID="98ad869d-6a1d-4f5c-be96-1b58de1dd31b" containerID="0a60c2807ea307cae6f618a01e4f713d77c4b7c373d2fc8bb6bdf8a3ab8b272e" exitCode=1
Feb 24 02:31:50.684911 master-0 kubenswrapper[31411]: I0224 02:31:50.684489 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-q84n6" event={"ID":"98ad869d-6a1d-4f5c-be96-1b58de1dd31b","Type":"ContainerDied","Data":"0a60c2807ea307cae6f618a01e4f713d77c4b7c373d2fc8bb6bdf8a3ab8b272e"}
Feb 24 02:31:50.685536 master-0 kubenswrapper[31411]: I0224 02:31:50.685494 31411 scope.go:117] "RemoveContainer" containerID="0a60c2807ea307cae6f618a01e4f713d77c4b7c373d2fc8bb6bdf8a3ab8b272e"
Feb 24 02:31:51.104049 master-0 kubenswrapper[31411]: I0224 02:31:51.102535 31411 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Feb 24 02:31:51.707609 master-0 kubenswrapper[31411]: I0224 02:31:51.707532 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-q84n6_98ad869d-6a1d-4f5c-be96-1b58de1dd31b/vg-manager/0.log"
Feb 24 02:31:51.707938 master-0 kubenswrapper[31411]: I0224 02:31:51.707663 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-q84n6" event={"ID":"98ad869d-6a1d-4f5c-be96-1b58de1dd31b","Type":"ContainerStarted","Data":"55e2403122936db7abaaee8e1cc2e57297c93033d9266dfd1b54026fdcfcf4aa"}
Feb 24 02:31:51.795355 master-0 kubenswrapper[31411]: I0224 02:31:51.795108 31411 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-24T02:31:51.102596743Z","Handler":null,"Name":""}
Feb 24 02:31:51.797683 master-0 kubenswrapper[31411]: I0224 02:31:51.797628 31411 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Feb 24 02:31:51.797759 master-0 kubenswrapper[31411]: I0224 02:31:51.797693 31411 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Feb 24 02:31:57.004161 master-0 kubenswrapper[31411]: I0224 02:31:57.004023 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:57.008069 master-0 kubenswrapper[31411]: I0224 02:31:57.007993 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:57.610086 master-0 kubenswrapper[31411]: I0224 02:31:57.609718 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6647cb86fc-wzjr8" podUID="2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" containerName="console" containerID="cri-o://6e4ad7afb43c18234ab6e709dbec5a6f90b6c6185396ae54cfe05f08c10f1fd8" gracePeriod=15
Feb 24 02:31:57.803614 master-0 kubenswrapper[31411]: I0224 02:31:57.801020 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6647cb86fc-wzjr8_2c5669c1-c4ed-4f40-b9f7-4e51d1fad369/console/0.log"
Feb 24 02:31:57.803614 master-0 kubenswrapper[31411]: I0224 02:31:57.801107 31411 generic.go:334] "Generic (PLEG): container finished" podID="2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" containerID="6e4ad7afb43c18234ab6e709dbec5a6f90b6c6185396ae54cfe05f08c10f1fd8" exitCode=2
Feb 24 02:31:57.803614 master-0 kubenswrapper[31411]: I0224 02:31:57.801683 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6647cb86fc-wzjr8" event={"ID":"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369","Type":"ContainerDied","Data":"6e4ad7afb43c18234ab6e709dbec5a6f90b6c6185396ae54cfe05f08c10f1fd8"}
Feb 24 02:31:57.803614 master-0 kubenswrapper[31411]: I0224 02:31:57.802285 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:57.807605 master-0 kubenswrapper[31411]: I0224 02:31:57.804988 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-q84n6"
Feb 24 02:31:58.257753 master-0 kubenswrapper[31411]: I0224 02:31:58.257696 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6647cb86fc-wzjr8_2c5669c1-c4ed-4f40-b9f7-4e51d1fad369/console/0.log"
Feb 24 02:31:58.259697 master-0 kubenswrapper[31411]: I0224 02:31:58.257794 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:31:58.350092 master-0 kubenswrapper[31411]: I0224 02:31:58.350032 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-oauth-serving-cert\") pod \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") "
Feb 24 02:31:58.350496 master-0 kubenswrapper[31411]: I0224 02:31:58.350143 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-service-ca\") pod \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") "
Feb 24 02:31:58.350496 master-0 kubenswrapper[31411]: I0224 02:31:58.350280 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-serving-cert\") pod \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") "
Feb 24 02:31:58.350496 master-0 kubenswrapper[31411]: I0224 02:31:58.350338 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft6s6\" (UniqueName: \"kubernetes.io/projected/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-kube-api-access-ft6s6\") pod \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") "
Feb 24 02:31:58.350496 master-0 kubenswrapper[31411]: I0224 02:31:58.350385 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-oauth-config\") pod \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") "
Feb 24 02:31:58.350724 master-0 kubenswrapper[31411]: I0224 02:31:58.350705 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-config\") pod \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") "
Feb 24 02:31:58.350869 master-0 kubenswrapper[31411]: I0224 02:31:58.350800 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-trusted-ca-bundle\") pod \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\" (UID: \"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369\") "
Feb 24 02:31:58.350902 master-0 kubenswrapper[31411]: I0224 02:31:58.350853 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-service-ca" (OuterVolumeSpecName: "service-ca") pod "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" (UID: "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:31:58.351415 master-0 kubenswrapper[31411]: I0224 02:31:58.351390 31411 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 02:31:58.351490 master-0 kubenswrapper[31411]: I0224 02:31:58.351396 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" (UID: "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:31:58.351989 master-0 kubenswrapper[31411]: I0224 02:31:58.351964 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" (UID: "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:31:58.354677 master-0 kubenswrapper[31411]: I0224 02:31:58.354560 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-config" (OuterVolumeSpecName: "console-config") pod "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" (UID: "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:31:58.355597 master-0 kubenswrapper[31411]: I0224 02:31:58.355554 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" (UID: "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:31:58.356115 master-0 kubenswrapper[31411]: I0224 02:31:58.356045 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-kube-api-access-ft6s6" (OuterVolumeSpecName: "kube-api-access-ft6s6") pod "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" (UID: "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369"). InnerVolumeSpecName "kube-api-access-ft6s6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:31:58.356848 master-0 kubenswrapper[31411]: I0224 02:31:58.356784 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" (UID: "2c5669c1-c4ed-4f40-b9f7-4e51d1fad369"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:31:58.454438 master-0 kubenswrapper[31411]: I0224 02:31:58.454358 31411 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:31:58.454438 master-0 kubenswrapper[31411]: I0224 02:31:58.454423 31411 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:31:58.454938 master-0 kubenswrapper[31411]: I0224 02:31:58.454453 31411 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 02:31:58.454938 master-0 kubenswrapper[31411]: I0224 02:31:58.454475 31411 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 02:31:58.454938 master-0 kubenswrapper[31411]: I0224 02:31:58.454501 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft6s6\" (UniqueName: \"kubernetes.io/projected/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-kube-api-access-ft6s6\") on node \"master-0\" DevicePath \"\""
Feb 24 02:31:58.454938 master-0 kubenswrapper[31411]: I0224 02:31:58.454521 31411 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:31:58.817517 master-0 kubenswrapper[31411]: I0224 02:31:58.817301 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6647cb86fc-wzjr8_2c5669c1-c4ed-4f40-b9f7-4e51d1fad369/console/0.log"
Feb 24 02:31:58.817910 master-0 kubenswrapper[31411]: I0224 02:31:58.817545 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6647cb86fc-wzjr8" event={"ID":"2c5669c1-c4ed-4f40-b9f7-4e51d1fad369","Type":"ContainerDied","Data":"7450de429ca07579a07d0b8e24d4f9c4c1527bb17a077ce11bf1b62717a13795"}
Feb 24 02:31:58.817910 master-0 kubenswrapper[31411]: I0224 02:31:58.817628 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6647cb86fc-wzjr8"
Feb 24 02:31:58.817910 master-0 kubenswrapper[31411]: I0224 02:31:58.817718 31411 scope.go:117] "RemoveContainer" containerID="6e4ad7afb43c18234ab6e709dbec5a6f90b6c6185396ae54cfe05f08c10f1fd8"
Feb 24 02:31:58.894154 master-0 kubenswrapper[31411]: I0224 02:31:58.893963 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6647cb86fc-wzjr8"]
Feb 24 02:31:58.902465 master-0 kubenswrapper[31411]: I0224 02:31:58.902400 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6647cb86fc-wzjr8"]
Feb 24 02:31:59.109106 master-0 kubenswrapper[31411]: I0224 02:31:59.108905 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" path="/var/lib/kubelet/pods/2c5669c1-c4ed-4f40-b9f7-4e51d1fad369/volumes"
Feb 24 02:32:00.080940 master-0 kubenswrapper[31411]: I0224 02:32:00.080868 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jjl54"]
Feb 24 02:32:00.081628 master-0 kubenswrapper[31411]: E0224 02:32:00.081314 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" containerName="console"
Feb 24 02:32:00.081628 master-0 kubenswrapper[31411]: I0224 02:32:00.081330 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" containerName="console"
Feb 24 02:32:00.081628 master-0 kubenswrapper[31411]: I0224 02:32:00.081514 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c5669c1-c4ed-4f40-b9f7-4e51d1fad369" containerName="console"
Feb 24 02:32:00.082378 master-0 kubenswrapper[31411]: I0224 02:32:00.082330 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jjl54"
Feb 24 02:32:00.084382 master-0 kubenswrapper[31411]: I0224 02:32:00.084352 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 24 02:32:00.087113 master-0 kubenswrapper[31411]: I0224 02:32:00.087077 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 24 02:32:00.101943 master-0 kubenswrapper[31411]: I0224 02:32:00.101895 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jjl54"]
Feb 24 02:32:00.227688 master-0 kubenswrapper[31411]: I0224 02:32:00.227606 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/97d278c4-c0d2-4b50-bd1e-50ff254685a6-kube-api-access-8b85q\") pod \"openstack-operator-index-jjl54\" (UID: \"97d278c4-c0d2-4b50-bd1e-50ff254685a6\") " pod="openstack-operators/openstack-operator-index-jjl54"
Feb 24 02:32:00.330718 master-0 kubenswrapper[31411]: I0224 02:32:00.330629 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/97d278c4-c0d2-4b50-bd1e-50ff254685a6-kube-api-access-8b85q\") pod \"openstack-operator-index-jjl54\" (UID: \"97d278c4-c0d2-4b50-bd1e-50ff254685a6\") " pod="openstack-operators/openstack-operator-index-jjl54"
Feb 24 02:32:00.359566 master-0 kubenswrapper[31411]: I0224 02:32:00.359397 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/97d278c4-c0d2-4b50-bd1e-50ff254685a6-kube-api-access-8b85q\") pod \"openstack-operator-index-jjl54\" (UID: \"97d278c4-c0d2-4b50-bd1e-50ff254685a6\") " pod="openstack-operators/openstack-operator-index-jjl54"
Feb 24 02:32:00.399119 master-0 kubenswrapper[31411]: I0224 02:32:00.399051 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jjl54"
Feb 24 02:32:00.935313 master-0 kubenswrapper[31411]: I0224 02:32:00.933345 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jjl54"]
Feb 24 02:32:01.870371 master-0 kubenswrapper[31411]: I0224 02:32:01.870255 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jjl54" event={"ID":"97d278c4-c0d2-4b50-bd1e-50ff254685a6","Type":"ContainerStarted","Data":"9fcd0b5645ee5d1796f48ca930f67b81b9aec94927a69b18cfc0ec55697bcd59"}
Feb 24 02:32:02.894012 master-0 kubenswrapper[31411]: I0224 02:32:02.891512 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jjl54" event={"ID":"97d278c4-c0d2-4b50-bd1e-50ff254685a6","Type":"ContainerStarted","Data":"b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9"}
Feb 24 02:32:02.942053 master-0 kubenswrapper[31411]: I0224 02:32:02.941821 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jjl54" podStartSLOduration=1.925602313 podStartE2EDuration="2.941786732s" podCreationTimestamp="2026-02-24 02:32:00 +0000 UTC" firstStartedPulling="2026-02-24 02:32:00.944063312 +0000 UTC m=+664.161261148" lastFinishedPulling="2026-02-24 02:32:01.960247681 +0000 UTC m=+665.177445567" observedRunningTime="2026-02-24 02:32:02.927260676 +0000 UTC m=+666.144458562" watchObservedRunningTime="2026-02-24 02:32:02.941786732 +0000 UTC m=+666.158984618"
Feb 24 02:32:04.030520 master-0 kubenswrapper[31411]: I0224 02:32:04.030428 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jjl54"]
Feb 24 02:32:04.641612 master-0 kubenswrapper[31411]: I0224 02:32:04.641494 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kxwsj"]
Feb 24 02:32:04.643907 master-0 kubenswrapper[31411]: I0224 02:32:04.643793 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kxwsj" Feb 24 02:32:04.656694 master-0 kubenswrapper[31411]: I0224 02:32:04.656629 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kxwsj"] Feb 24 02:32:04.754824 master-0 kubenswrapper[31411]: I0224 02:32:04.754744 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8q47\" (UniqueName: \"kubernetes.io/projected/c96d10df-3ceb-4da9-bb6d-169b971d225a-kube-api-access-t8q47\") pod \"openstack-operator-index-kxwsj\" (UID: \"c96d10df-3ceb-4da9-bb6d-169b971d225a\") " pod="openstack-operators/openstack-operator-index-kxwsj" Feb 24 02:32:04.858046 master-0 kubenswrapper[31411]: I0224 02:32:04.857852 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8q47\" (UniqueName: \"kubernetes.io/projected/c96d10df-3ceb-4da9-bb6d-169b971d225a-kube-api-access-t8q47\") pod \"openstack-operator-index-kxwsj\" (UID: \"c96d10df-3ceb-4da9-bb6d-169b971d225a\") " pod="openstack-operators/openstack-operator-index-kxwsj" Feb 24 02:32:04.888273 master-0 kubenswrapper[31411]: I0224 02:32:04.888201 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8q47\" (UniqueName: \"kubernetes.io/projected/c96d10df-3ceb-4da9-bb6d-169b971d225a-kube-api-access-t8q47\") pod \"openstack-operator-index-kxwsj\" (UID: \"c96d10df-3ceb-4da9-bb6d-169b971d225a\") " pod="openstack-operators/openstack-operator-index-kxwsj" Feb 24 02:32:04.960564 master-0 kubenswrapper[31411]: I0224 02:32:04.960352 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-jjl54" podUID="97d278c4-c0d2-4b50-bd1e-50ff254685a6" containerName="registry-server" containerID="cri-o://b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9" gracePeriod=2 Feb 24 02:32:04.973344 master-0 
kubenswrapper[31411]: I0224 02:32:04.973274 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kxwsj" Feb 24 02:32:05.487307 master-0 kubenswrapper[31411]: I0224 02:32:05.487246 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kxwsj"] Feb 24 02:32:05.493209 master-0 kubenswrapper[31411]: W0224 02:32:05.493150 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc96d10df_3ceb_4da9_bb6d_169b971d225a.slice/crio-39a00a91b3a15be1c397e1be50a92acdd515f36f488b14bd8aa1715cb4ecb985 WatchSource:0}: Error finding container 39a00a91b3a15be1c397e1be50a92acdd515f36f488b14bd8aa1715cb4ecb985: Status 404 returned error can't find the container with id 39a00a91b3a15be1c397e1be50a92acdd515f36f488b14bd8aa1715cb4ecb985 Feb 24 02:32:05.591400 master-0 kubenswrapper[31411]: I0224 02:32:05.591306 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jjl54" Feb 24 02:32:05.682410 master-0 kubenswrapper[31411]: I0224 02:32:05.680879 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/97d278c4-c0d2-4b50-bd1e-50ff254685a6-kube-api-access-8b85q\") pod \"97d278c4-c0d2-4b50-bd1e-50ff254685a6\" (UID: \"97d278c4-c0d2-4b50-bd1e-50ff254685a6\") " Feb 24 02:32:05.686804 master-0 kubenswrapper[31411]: I0224 02:32:05.686659 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d278c4-c0d2-4b50-bd1e-50ff254685a6-kube-api-access-8b85q" (OuterVolumeSpecName: "kube-api-access-8b85q") pod "97d278c4-c0d2-4b50-bd1e-50ff254685a6" (UID: "97d278c4-c0d2-4b50-bd1e-50ff254685a6"). InnerVolumeSpecName "kube-api-access-8b85q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:32:05.784715 master-0 kubenswrapper[31411]: I0224 02:32:05.784547 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b85q\" (UniqueName: \"kubernetes.io/projected/97d278c4-c0d2-4b50-bd1e-50ff254685a6-kube-api-access-8b85q\") on node \"master-0\" DevicePath \"\"" Feb 24 02:32:05.975621 master-0 kubenswrapper[31411]: I0224 02:32:05.975526 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxwsj" event={"ID":"c96d10df-3ceb-4da9-bb6d-169b971d225a","Type":"ContainerStarted","Data":"39a00a91b3a15be1c397e1be50a92acdd515f36f488b14bd8aa1715cb4ecb985"} Feb 24 02:32:05.978197 master-0 kubenswrapper[31411]: I0224 02:32:05.977849 31411 generic.go:334] "Generic (PLEG): container finished" podID="97d278c4-c0d2-4b50-bd1e-50ff254685a6" containerID="b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9" exitCode=0 Feb 24 02:32:05.978197 master-0 kubenswrapper[31411]: I0224 02:32:05.977903 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jjl54" event={"ID":"97d278c4-c0d2-4b50-bd1e-50ff254685a6","Type":"ContainerDied","Data":"b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9"} Feb 24 02:32:05.978197 master-0 kubenswrapper[31411]: I0224 02:32:05.977934 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jjl54" event={"ID":"97d278c4-c0d2-4b50-bd1e-50ff254685a6","Type":"ContainerDied","Data":"9fcd0b5645ee5d1796f48ca930f67b81b9aec94927a69b18cfc0ec55697bcd59"} Feb 24 02:32:05.978197 master-0 kubenswrapper[31411]: I0224 02:32:05.977950 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jjl54" Feb 24 02:32:05.978197 master-0 kubenswrapper[31411]: I0224 02:32:05.977970 31411 scope.go:117] "RemoveContainer" containerID="b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9" Feb 24 02:32:06.038960 master-0 kubenswrapper[31411]: I0224 02:32:06.038882 31411 scope.go:117] "RemoveContainer" containerID="b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9" Feb 24 02:32:06.040770 master-0 kubenswrapper[31411]: E0224 02:32:06.040692 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9\": container with ID starting with b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9 not found: ID does not exist" containerID="b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9" Feb 24 02:32:06.040895 master-0 kubenswrapper[31411]: I0224 02:32:06.040779 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9"} err="failed to get container status \"b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9\": rpc error: code = NotFound desc = could not find container \"b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9\": container with ID starting with b841e45070bc1bfe8a8f3819af18db1c144854e921eca675b9640b246acccfb9 not found: ID does not exist" Feb 24 02:32:06.049614 master-0 kubenswrapper[31411]: I0224 02:32:06.049518 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-jjl54"] Feb 24 02:32:06.056847 master-0 kubenswrapper[31411]: I0224 02:32:06.056798 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-jjl54"] Feb 24 02:32:06.995829 master-0 kubenswrapper[31411]: I0224 
02:32:06.994951 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxwsj" event={"ID":"c96d10df-3ceb-4da9-bb6d-169b971d225a","Type":"ContainerStarted","Data":"a0b48eb406fab3db21f04d4c3b89f2a56a35b6d7250d4d3f3370140421c623c5"} Feb 24 02:32:07.028456 master-0 kubenswrapper[31411]: I0224 02:32:07.028310 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kxwsj" podStartSLOduration=2.590802008 podStartE2EDuration="3.028278145s" podCreationTimestamp="2026-02-24 02:32:04 +0000 UTC" firstStartedPulling="2026-02-24 02:32:05.519702845 +0000 UTC m=+668.736900701" lastFinishedPulling="2026-02-24 02:32:05.957178952 +0000 UTC m=+669.174376838" observedRunningTime="2026-02-24 02:32:07.019391157 +0000 UTC m=+670.236589053" watchObservedRunningTime="2026-02-24 02:32:07.028278145 +0000 UTC m=+670.245476031" Feb 24 02:32:07.119823 master-0 kubenswrapper[31411]: I0224 02:32:07.119691 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d278c4-c0d2-4b50-bd1e-50ff254685a6" path="/var/lib/kubelet/pods/97d278c4-c0d2-4b50-bd1e-50ff254685a6/volumes" Feb 24 02:32:14.973696 master-0 kubenswrapper[31411]: I0224 02:32:14.973564 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-kxwsj" Feb 24 02:32:14.973696 master-0 kubenswrapper[31411]: I0224 02:32:14.973685 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-kxwsj" Feb 24 02:32:15.045469 master-0 kubenswrapper[31411]: I0224 02:32:15.045405 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-kxwsj" Feb 24 02:32:15.146988 master-0 kubenswrapper[31411]: I0224 02:32:15.146904 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-kxwsj" Feb 24 02:32:16.903795 master-0 kubenswrapper[31411]: I0224 02:32:16.903701 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq"] Feb 24 02:32:16.904687 master-0 kubenswrapper[31411]: E0224 02:32:16.904331 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d278c4-c0d2-4b50-bd1e-50ff254685a6" containerName="registry-server" Feb 24 02:32:16.904687 master-0 kubenswrapper[31411]: I0224 02:32:16.904356 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d278c4-c0d2-4b50-bd1e-50ff254685a6" containerName="registry-server" Feb 24 02:32:16.904851 master-0 kubenswrapper[31411]: I0224 02:32:16.904780 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d278c4-c0d2-4b50-bd1e-50ff254685a6" containerName="registry-server" Feb 24 02:32:16.907925 master-0 kubenswrapper[31411]: I0224 02:32:16.907869 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:16.924295 master-0 kubenswrapper[31411]: I0224 02:32:16.924224 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq"] Feb 24 02:32:17.096692 master-0 kubenswrapper[31411]: I0224 02:32:17.096568 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-util\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.097163 master-0 kubenswrapper[31411]: I0224 02:32:17.097079 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94fd\" (UniqueName: \"kubernetes.io/projected/33033660-765b-494d-8bc4-a6af0592fac5-kube-api-access-b94fd\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.097534 master-0 kubenswrapper[31411]: I0224 02:32:17.097434 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.200656 master-0 kubenswrapper[31411]: I0224 02:32:17.200537 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-b94fd\" (UniqueName: \"kubernetes.io/projected/33033660-765b-494d-8bc4-a6af0592fac5-kube-api-access-b94fd\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.200960 master-0 kubenswrapper[31411]: I0224 02:32:17.200741 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.200960 master-0 kubenswrapper[31411]: I0224 02:32:17.200879 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-util\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.201881 master-0 kubenswrapper[31411]: I0224 02:32:17.201806 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.201994 master-0 kubenswrapper[31411]: I0224 02:32:17.201949 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-util\") pod 
\"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.232299 master-0 kubenswrapper[31411]: I0224 02:32:17.232205 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94fd\" (UniqueName: \"kubernetes.io/projected/33033660-765b-494d-8bc4-a6af0592fac5-kube-api-access-b94fd\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.279327 master-0 kubenswrapper[31411]: I0224 02:32:17.279241 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:17.819028 master-0 kubenswrapper[31411]: I0224 02:32:17.818977 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq"] Feb 24 02:32:17.828114 master-0 kubenswrapper[31411]: W0224 02:32:17.828081 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33033660_765b_494d_8bc4_a6af0592fac5.slice/crio-06b6e19958a6040f6074ce236636d5c75c1d84dd8ff36daf97f88710b0eaf8a4 WatchSource:0}: Error finding container 06b6e19958a6040f6074ce236636d5c75c1d84dd8ff36daf97f88710b0eaf8a4: Status 404 returned error can't find the container with id 06b6e19958a6040f6074ce236636d5c75c1d84dd8ff36daf97f88710b0eaf8a4 Feb 24 02:32:18.146038 master-0 kubenswrapper[31411]: I0224 02:32:18.145962 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" 
event={"ID":"33033660-765b-494d-8bc4-a6af0592fac5","Type":"ContainerStarted","Data":"fd1f6b3d32098c1924674bf21fdeca2851190eb4adad801e7ef205737231234d"} Feb 24 02:32:18.146038 master-0 kubenswrapper[31411]: I0224 02:32:18.146042 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" event={"ID":"33033660-765b-494d-8bc4-a6af0592fac5","Type":"ContainerStarted","Data":"06b6e19958a6040f6074ce236636d5c75c1d84dd8ff36daf97f88710b0eaf8a4"} Feb 24 02:32:19.160931 master-0 kubenswrapper[31411]: I0224 02:32:19.160822 31411 generic.go:334] "Generic (PLEG): container finished" podID="33033660-765b-494d-8bc4-a6af0592fac5" containerID="fd1f6b3d32098c1924674bf21fdeca2851190eb4adad801e7ef205737231234d" exitCode=0 Feb 24 02:32:19.160931 master-0 kubenswrapper[31411]: I0224 02:32:19.160918 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" event={"ID":"33033660-765b-494d-8bc4-a6af0592fac5","Type":"ContainerDied","Data":"fd1f6b3d32098c1924674bf21fdeca2851190eb4adad801e7ef205737231234d"} Feb 24 02:32:21.245368 master-0 kubenswrapper[31411]: I0224 02:32:21.245258 31411 generic.go:334] "Generic (PLEG): container finished" podID="33033660-765b-494d-8bc4-a6af0592fac5" containerID="b8042ad8c316cf06e9e96a625b768fb59fc7f1553c4744848a1d6dd89c7aed01" exitCode=0 Feb 24 02:32:21.245368 master-0 kubenswrapper[31411]: I0224 02:32:21.245344 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" event={"ID":"33033660-765b-494d-8bc4-a6af0592fac5","Type":"ContainerDied","Data":"b8042ad8c316cf06e9e96a625b768fb59fc7f1553c4744848a1d6dd89c7aed01"} Feb 24 02:32:22.267886 master-0 kubenswrapper[31411]: I0224 02:32:22.267778 31411 generic.go:334] "Generic (PLEG): container finished" podID="33033660-765b-494d-8bc4-a6af0592fac5" 
containerID="1cd4c040518d51429c7b02339a63ec11242f08a5a65a2967f148fc9160882df0" exitCode=0 Feb 24 02:32:22.267886 master-0 kubenswrapper[31411]: I0224 02:32:22.267870 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" event={"ID":"33033660-765b-494d-8bc4-a6af0592fac5","Type":"ContainerDied","Data":"1cd4c040518d51429c7b02339a63ec11242f08a5a65a2967f148fc9160882df0"} Feb 24 02:32:23.809845 master-0 kubenswrapper[31411]: I0224 02:32:23.809724 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:24.211742 master-0 kubenswrapper[31411]: I0224 02:32:24.211669 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-bundle\") pod \"33033660-765b-494d-8bc4-a6af0592fac5\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " Feb 24 02:32:24.212110 master-0 kubenswrapper[31411]: I0224 02:32:24.211772 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b94fd\" (UniqueName: \"kubernetes.io/projected/33033660-765b-494d-8bc4-a6af0592fac5-kube-api-access-b94fd\") pod \"33033660-765b-494d-8bc4-a6af0592fac5\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " Feb 24 02:32:24.212769 master-0 kubenswrapper[31411]: I0224 02:32:24.212308 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-util\") pod \"33033660-765b-494d-8bc4-a6af0592fac5\" (UID: \"33033660-765b-494d-8bc4-a6af0592fac5\") " Feb 24 02:32:24.213401 master-0 kubenswrapper[31411]: I0224 02:32:24.213330 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-bundle" (OuterVolumeSpecName: "bundle") pod "33033660-765b-494d-8bc4-a6af0592fac5" (UID: "33033660-765b-494d-8bc4-a6af0592fac5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:32:24.216948 master-0 kubenswrapper[31411]: I0224 02:32:24.216080 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33033660-765b-494d-8bc4-a6af0592fac5-kube-api-access-b94fd" (OuterVolumeSpecName: "kube-api-access-b94fd") pod "33033660-765b-494d-8bc4-a6af0592fac5" (UID: "33033660-765b-494d-8bc4-a6af0592fac5"). InnerVolumeSpecName "kube-api-access-b94fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:32:24.237072 master-0 kubenswrapper[31411]: I0224 02:32:24.237021 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-util" (OuterVolumeSpecName: "util") pod "33033660-765b-494d-8bc4-a6af0592fac5" (UID: "33033660-765b-494d-8bc4-a6af0592fac5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:32:24.298502 master-0 kubenswrapper[31411]: I0224 02:32:24.298422 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" event={"ID":"33033660-765b-494d-8bc4-a6af0592fac5","Type":"ContainerDied","Data":"06b6e19958a6040f6074ce236636d5c75c1d84dd8ff36daf97f88710b0eaf8a4"} Feb 24 02:32:24.298502 master-0 kubenswrapper[31411]: I0224 02:32:24.298509 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06b6e19958a6040f6074ce236636d5c75c1d84dd8ff36daf97f88710b0eaf8a4" Feb 24 02:32:24.298897 master-0 kubenswrapper[31411]: I0224 02:32:24.298863 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq" Feb 24 02:32:24.314852 master-0 kubenswrapper[31411]: I0224 02:32:24.314770 31411 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-util\") on node \"master-0\" DevicePath \"\"" Feb 24 02:32:24.314852 master-0 kubenswrapper[31411]: I0224 02:32:24.314809 31411 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33033660-765b-494d-8bc4-a6af0592fac5-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:32:24.314852 master-0 kubenswrapper[31411]: I0224 02:32:24.314824 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b94fd\" (UniqueName: \"kubernetes.io/projected/33033660-765b-494d-8bc4-a6af0592fac5-kube-api-access-b94fd\") on node \"master-0\" DevicePath \"\"" Feb 24 02:32:29.184358 master-0 kubenswrapper[31411]: I0224 02:32:29.184287 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq"] Feb 24 02:32:29.185099 master-0 kubenswrapper[31411]: E0224 02:32:29.184757 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33033660-765b-494d-8bc4-a6af0592fac5" containerName="extract" Feb 24 02:32:29.185099 master-0 kubenswrapper[31411]: I0224 02:32:29.184771 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="33033660-765b-494d-8bc4-a6af0592fac5" containerName="extract" Feb 24 02:32:29.185099 master-0 kubenswrapper[31411]: E0224 02:32:29.184785 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33033660-765b-494d-8bc4-a6af0592fac5" containerName="pull" Feb 24 02:32:29.185099 master-0 kubenswrapper[31411]: I0224 02:32:29.184793 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="33033660-765b-494d-8bc4-a6af0592fac5" containerName="pull" Feb 24 02:32:29.185099 
master-0 kubenswrapper[31411]: E0224 02:32:29.184810 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33033660-765b-494d-8bc4-a6af0592fac5" containerName="util" Feb 24 02:32:29.185099 master-0 kubenswrapper[31411]: I0224 02:32:29.184817 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="33033660-765b-494d-8bc4-a6af0592fac5" containerName="util" Feb 24 02:32:29.185099 master-0 kubenswrapper[31411]: I0224 02:32:29.185039 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="33033660-765b-494d-8bc4-a6af0592fac5" containerName="extract" Feb 24 02:32:29.185734 master-0 kubenswrapper[31411]: I0224 02:32:29.185711 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq" Feb 24 02:32:29.232654 master-0 kubenswrapper[31411]: I0224 02:32:29.232598 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq"] Feb 24 02:32:29.280309 master-0 kubenswrapper[31411]: I0224 02:32:29.280260 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnf4w\" (UniqueName: \"kubernetes.io/projected/d2f839c2-1e28-423e-bd0f-4f6b2c6f1e18-kube-api-access-gnf4w\") pod \"openstack-operator-controller-init-55c649df44-lm7cq\" (UID: \"d2f839c2-1e28-423e-bd0f-4f6b2c6f1e18\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq" Feb 24 02:32:29.382444 master-0 kubenswrapper[31411]: I0224 02:32:29.382364 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnf4w\" (UniqueName: \"kubernetes.io/projected/d2f839c2-1e28-423e-bd0f-4f6b2c6f1e18-kube-api-access-gnf4w\") pod \"openstack-operator-controller-init-55c649df44-lm7cq\" (UID: \"d2f839c2-1e28-423e-bd0f-4f6b2c6f1e18\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq" Feb 
24 02:32:29.399262 master-0 kubenswrapper[31411]: I0224 02:32:29.398520 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnf4w\" (UniqueName: \"kubernetes.io/projected/d2f839c2-1e28-423e-bd0f-4f6b2c6f1e18-kube-api-access-gnf4w\") pod \"openstack-operator-controller-init-55c649df44-lm7cq\" (UID: \"d2f839c2-1e28-423e-bd0f-4f6b2c6f1e18\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq"
Feb 24 02:32:29.514804 master-0 kubenswrapper[31411]: I0224 02:32:29.514745 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq"
Feb 24 02:32:30.113196 master-0 kubenswrapper[31411]: I0224 02:32:30.113123 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq"]
Feb 24 02:32:30.565013 master-0 kubenswrapper[31411]: I0224 02:32:30.564866 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq" event={"ID":"d2f839c2-1e28-423e-bd0f-4f6b2c6f1e18","Type":"ContainerStarted","Data":"7171ef5ea1d2bf34e22c2cc9506e295112972eb2db8308693f51d2856d6bc516"}
Feb 24 02:32:35.630223 master-0 kubenswrapper[31411]: I0224 02:32:35.630113 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq" event={"ID":"d2f839c2-1e28-423e-bd0f-4f6b2c6f1e18","Type":"ContainerStarted","Data":"e1fb1a38c01c5c75ba6da3765e1560f2b4d9aec78637f48b8a5b7058d35713db"}
Feb 24 02:32:35.631323 master-0 kubenswrapper[31411]: I0224 02:32:35.630459 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq"
Feb 24 02:32:35.707297 master-0 kubenswrapper[31411]: I0224 02:32:35.706352 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq" podStartSLOduration=1.646723901 podStartE2EDuration="6.706316269s" podCreationTimestamp="2026-02-24 02:32:29 +0000 UTC" firstStartedPulling="2026-02-24 02:32:30.11959671 +0000 UTC m=+693.336794556" lastFinishedPulling="2026-02-24 02:32:35.179189088 +0000 UTC m=+698.396386924" observedRunningTime="2026-02-24 02:32:35.685677982 +0000 UTC m=+698.902875858" watchObservedRunningTime="2026-02-24 02:32:35.706316269 +0000 UTC m=+698.923514155"
Feb 24 02:32:49.519223 master-0 kubenswrapper[31411]: I0224 02:32:49.519105 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-55c649df44-lm7cq"
Feb 24 02:33:10.885207 master-0 kubenswrapper[31411]: I0224 02:33:10.885136 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt"]
Feb 24 02:33:10.886892 master-0 kubenswrapper[31411]: I0224 02:33:10.886864 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt"
Feb 24 02:33:10.898044 master-0 kubenswrapper[31411]: I0224 02:33:10.897796 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2"]
Feb 24 02:33:10.899501 master-0 kubenswrapper[31411]: I0224 02:33:10.899476 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2"
Feb 24 02:33:10.905733 master-0 kubenswrapper[31411]: I0224 02:33:10.904365 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2"]
Feb 24 02:33:10.912065 master-0 kubenswrapper[31411]: I0224 02:33:10.911447 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt"]
Feb 24 02:33:10.929623 master-0 kubenswrapper[31411]: I0224 02:33:10.929538 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69"]
Feb 24 02:33:10.933326 master-0 kubenswrapper[31411]: I0224 02:33:10.933302 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69"
Feb 24 02:33:10.958536 master-0 kubenswrapper[31411]: I0224 02:33:10.958427 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc"]
Feb 24 02:33:10.965708 master-0 kubenswrapper[31411]: I0224 02:33:10.965138 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc"
Feb 24 02:33:10.976566 master-0 kubenswrapper[31411]: I0224 02:33:10.976507 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69"]
Feb 24 02:33:10.989594 master-0 kubenswrapper[31411]: I0224 02:33:10.987988 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc"]
Feb 24 02:33:11.016813 master-0 kubenswrapper[31411]: I0224 02:33:11.016664 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt"]
Feb 24 02:33:11.019760 master-0 kubenswrapper[31411]: I0224 02:33:11.019744 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt"]
Feb 24 02:33:11.019924 master-0 kubenswrapper[31411]: I0224 02:33:11.019884 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt"
Feb 24 02:33:11.029129 master-0 kubenswrapper[31411]: I0224 02:33:11.029034 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb"]
Feb 24 02:33:11.037076 master-0 kubenswrapper[31411]: I0224 02:33:11.035456 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb"
Feb 24 02:33:11.053478 master-0 kubenswrapper[31411]: I0224 02:33:11.053433 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb"]
Feb 24 02:33:11.061086 master-0 kubenswrapper[31411]: I0224 02:33:11.061039 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44njx\" (UniqueName: \"kubernetes.io/projected/129f6086-7edd-41da-adf1-38c9b82e0932-kube-api-access-44njx\") pod \"designate-operator-controller-manager-6d8bf5c495-dzbvc\" (UID: \"129f6086-7edd-41da-adf1-38c9b82e0932\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc"
Feb 24 02:33:11.061187 master-0 kubenswrapper[31411]: I0224 02:33:11.061121 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmz84\" (UniqueName: \"kubernetes.io/projected/7cac8bf4-b1c9-4b3c-a536-8408f6ad8495-kube-api-access-nmz84\") pod \"barbican-operator-controller-manager-868647ff47-2ldv2\" (UID: \"7cac8bf4-b1c9-4b3c-a536-8408f6ad8495\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2"
Feb 24 02:33:11.061187 master-0 kubenswrapper[31411]: I0224 02:33:11.061168 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjt49\" (UniqueName: \"kubernetes.io/projected/44cfb629-0b50-4e8c-9b4c-e329a1b3c533-kube-api-access-zjt49\") pod \"glance-operator-controller-manager-784b5bb6c5-zfd69\" (UID: \"44cfb629-0b50-4e8c-9b4c-e329a1b3c533\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69"
Feb 24 02:33:11.061256 master-0 kubenswrapper[31411]: I0224 02:33:11.061221 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8frhn\" (UniqueName: \"kubernetes.io/projected/dd4e8aac-8f11-4e85-ac94-2160ae3adf4c-kube-api-access-8frhn\") pod \"cinder-operator-controller-manager-55d77d7b5c-b72xt\" (UID: \"dd4e8aac-8f11-4e85-ac94-2160ae3adf4c\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt"
Feb 24 02:33:11.063091 master-0 kubenswrapper[31411]: I0224 02:33:11.063042 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t"]
Feb 24 02:33:11.065255 master-0 kubenswrapper[31411]: I0224 02:33:11.064414 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t"
Feb 24 02:33:11.072876 master-0 kubenswrapper[31411]: I0224 02:33:11.066356 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 24 02:33:11.091431 master-0 kubenswrapper[31411]: I0224 02:33:11.091373 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t"]
Feb 24 02:33:11.128955 master-0 kubenswrapper[31411]: I0224 02:33:11.128896 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2"]
Feb 24 02:33:11.132591 master-0 kubenswrapper[31411]: I0224 02:33:11.130293 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb"]
Feb 24 02:33:11.132693 master-0 kubenswrapper[31411]: I0224 02:33:11.132664 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb"
Feb 24 02:33:11.133891 master-0 kubenswrapper[31411]: I0224 02:33:11.133869 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2"
Feb 24 02:33:11.156755 master-0 kubenswrapper[31411]: I0224 02:33:11.152137 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2"]
Feb 24 02:33:11.163206 master-0 kubenswrapper[31411]: I0224 02:33:11.163154 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wmpf\" (UniqueName: \"kubernetes.io/projected/3bb72077-6f36-439c-8cc0-83bdbfcc3935-kube-api-access-9wmpf\") pod \"heat-operator-controller-manager-69f49c598c-5t6bt\" (UID: \"3bb72077-6f36-439c-8cc0-83bdbfcc3935\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt"
Feb 24 02:33:11.163326 master-0 kubenswrapper[31411]: I0224 02:33:11.163272 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8frhn\" (UniqueName: \"kubernetes.io/projected/dd4e8aac-8f11-4e85-ac94-2160ae3adf4c-kube-api-access-8frhn\") pod \"cinder-operator-controller-manager-55d77d7b5c-b72xt\" (UID: \"dd4e8aac-8f11-4e85-ac94-2160ae3adf4c\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt"
Feb 24 02:33:11.163326 master-0 kubenswrapper[31411]: I0224 02:33:11.163314 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t"
Feb 24 02:33:11.163404 master-0 kubenswrapper[31411]: I0224 02:33:11.163352 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488wl\" (UniqueName: \"kubernetes.io/projected/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-kube-api-access-488wl\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t"
Feb 24 02:33:11.163404 master-0 kubenswrapper[31411]: I0224 02:33:11.163385 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fz8n\" (UniqueName: \"kubernetes.io/projected/6ae1849b-a4a6-4f60-bf3d-713c1f0df81f-kube-api-access-2fz8n\") pod \"horizon-operator-controller-manager-5b9b8895d5-49gvb\" (UID: \"6ae1849b-a4a6-4f60-bf3d-713c1f0df81f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb"
Feb 24 02:33:11.163482 master-0 kubenswrapper[31411]: I0224 02:33:11.163428 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44njx\" (UniqueName: \"kubernetes.io/projected/129f6086-7edd-41da-adf1-38c9b82e0932-kube-api-access-44njx\") pod \"designate-operator-controller-manager-6d8bf5c495-dzbvc\" (UID: \"129f6086-7edd-41da-adf1-38c9b82e0932\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc"
Feb 24 02:33:11.164839 master-0 kubenswrapper[31411]: I0224 02:33:11.164294 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmz84\" (UniqueName: \"kubernetes.io/projected/7cac8bf4-b1c9-4b3c-a536-8408f6ad8495-kube-api-access-nmz84\") pod \"barbican-operator-controller-manager-868647ff47-2ldv2\" (UID: \"7cac8bf4-b1c9-4b3c-a536-8408f6ad8495\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2"
Feb 24 02:33:11.164839 master-0 kubenswrapper[31411]: I0224 02:33:11.164365 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjt49\" (UniqueName: \"kubernetes.io/projected/44cfb629-0b50-4e8c-9b4c-e329a1b3c533-kube-api-access-zjt49\") pod \"glance-operator-controller-manager-784b5bb6c5-zfd69\" (UID: \"44cfb629-0b50-4e8c-9b4c-e329a1b3c533\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69"
Feb 24 02:33:11.169425 master-0 kubenswrapper[31411]: I0224 02:33:11.169384 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb"]
Feb 24 02:33:11.235777 master-0 kubenswrapper[31411]: I0224 02:33:11.235727 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmz84\" (UniqueName: \"kubernetes.io/projected/7cac8bf4-b1c9-4b3c-a536-8408f6ad8495-kube-api-access-nmz84\") pod \"barbican-operator-controller-manager-868647ff47-2ldv2\" (UID: \"7cac8bf4-b1c9-4b3c-a536-8408f6ad8495\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2"
Feb 24 02:33:11.239622 master-0 kubenswrapper[31411]: I0224 02:33:11.236134 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44njx\" (UniqueName: \"kubernetes.io/projected/129f6086-7edd-41da-adf1-38c9b82e0932-kube-api-access-44njx\") pod \"designate-operator-controller-manager-6d8bf5c495-dzbvc\" (UID: \"129f6086-7edd-41da-adf1-38c9b82e0932\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc"
Feb 24 02:33:11.239622 master-0 kubenswrapper[31411]: I0224 02:33:11.236243 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8frhn\" (UniqueName: \"kubernetes.io/projected/dd4e8aac-8f11-4e85-ac94-2160ae3adf4c-kube-api-access-8frhn\") pod \"cinder-operator-controller-manager-55d77d7b5c-b72xt\" (UID: \"dd4e8aac-8f11-4e85-ac94-2160ae3adf4c\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt"
Feb 24 02:33:11.239622 master-0 kubenswrapper[31411]: I0224 02:33:11.236741 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjt49\" (UniqueName: \"kubernetes.io/projected/44cfb629-0b50-4e8c-9b4c-e329a1b3c533-kube-api-access-zjt49\") pod \"glance-operator-controller-manager-784b5bb6c5-zfd69\" (UID: \"44cfb629-0b50-4e8c-9b4c-e329a1b3c533\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69"
Feb 24 02:33:11.239622 master-0 kubenswrapper[31411]: I0224 02:33:11.236782 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-psxsg"]
Feb 24 02:33:11.240442 master-0 kubenswrapper[31411]: I0224 02:33:11.240397 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt"
Feb 24 02:33:11.240878 master-0 kubenswrapper[31411]: I0224 02:33:11.240717 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg"
Feb 24 02:33:11.267603 master-0 kubenswrapper[31411]: I0224 02:33:11.255303 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2"
Feb 24 02:33:11.287638 master-0 kubenswrapper[31411]: I0224 02:33:11.287556 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t"
Feb 24 02:33:11.287892 master-0 kubenswrapper[31411]: I0224 02:33:11.287667 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-488wl\" (UniqueName: \"kubernetes.io/projected/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-kube-api-access-488wl\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t"
Feb 24 02:33:11.287892 master-0 kubenswrapper[31411]: I0224 02:33:11.287716 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fz8n\" (UniqueName: \"kubernetes.io/projected/6ae1849b-a4a6-4f60-bf3d-713c1f0df81f-kube-api-access-2fz8n\") pod \"horizon-operator-controller-manager-5b9b8895d5-49gvb\" (UID: \"6ae1849b-a4a6-4f60-bf3d-713c1f0df81f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb"
Feb 24 02:33:11.287892 master-0 kubenswrapper[31411]: I0224 02:33:11.287887 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ncwh\" (UniqueName: \"kubernetes.io/projected/70a415e4-fc72-4449-87a5-67a04c4ee4aa-kube-api-access-2ncwh\") pod \"keystone-operator-controller-manager-b4d948c87-ws6cb\" (UID: \"70a415e4-fc72-4449-87a5-67a04c4ee4aa\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb"
Feb 24 02:33:11.288824 master-0 kubenswrapper[31411]: I0224 02:33:11.288008 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wmpf\" (UniqueName: \"kubernetes.io/projected/3bb72077-6f36-439c-8cc0-83bdbfcc3935-kube-api-access-9wmpf\") pod \"heat-operator-controller-manager-69f49c598c-5t6bt\" (UID: \"3bb72077-6f36-439c-8cc0-83bdbfcc3935\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt"
Feb 24 02:33:11.288824 master-0 kubenswrapper[31411]: E0224 02:33:11.288180 31411 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 24 02:33:11.288824 master-0 kubenswrapper[31411]: I0224 02:33:11.288253 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c56pq\" (UniqueName: \"kubernetes.io/projected/2da684b9-3acd-40d2-8562-f212bc136dc5-kube-api-access-c56pq\") pod \"ironic-operator-controller-manager-554564d7fc-hksp2\" (UID: \"2da684b9-3acd-40d2-8562-f212bc136dc5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2"
Feb 24 02:33:11.288824 master-0 kubenswrapper[31411]: E0224 02:33:11.288301 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert podName:37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:11.788270105 +0000 UTC m=+735.005467941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert") pod "infra-operator-controller-manager-5f879c76b6-2kk8t" (UID: "37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4") : secret "infra-operator-webhook-server-cert" not found
Feb 24 02:33:11.288824 master-0 kubenswrapper[31411]: I0224 02:33:11.288211 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69"
Feb 24 02:33:11.310513 master-0 kubenswrapper[31411]: I0224 02:33:11.300377 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc"
Feb 24 02:33:11.330259 master-0 kubenswrapper[31411]: I0224 02:33:11.326443 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-psxsg"]
Feb 24 02:33:11.330259 master-0 kubenswrapper[31411]: I0224 02:33:11.326518 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j"]
Feb 24 02:33:11.330259 master-0 kubenswrapper[31411]: I0224 02:33:11.327865 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j"
Feb 24 02:33:11.345653 master-0 kubenswrapper[31411]: I0224 02:33:11.343335 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wmpf\" (UniqueName: \"kubernetes.io/projected/3bb72077-6f36-439c-8cc0-83bdbfcc3935-kube-api-access-9wmpf\") pod \"heat-operator-controller-manager-69f49c598c-5t6bt\" (UID: \"3bb72077-6f36-439c-8cc0-83bdbfcc3935\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt"
Feb 24 02:33:11.353719 master-0 kubenswrapper[31411]: I0224 02:33:11.349879 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-488wl\" (UniqueName: \"kubernetes.io/projected/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-kube-api-access-488wl\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t"
Feb 24 02:33:11.361232 master-0 kubenswrapper[31411]: I0224 02:33:11.361166 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt"
Feb 24 02:33:11.398375 master-0 kubenswrapper[31411]: I0224 02:33:11.392110 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fz8n\" (UniqueName: \"kubernetes.io/projected/6ae1849b-a4a6-4f60-bf3d-713c1f0df81f-kube-api-access-2fz8n\") pod \"horizon-operator-controller-manager-5b9b8895d5-49gvb\" (UID: \"6ae1849b-a4a6-4f60-bf3d-713c1f0df81f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb"
Feb 24 02:33:11.398375 master-0 kubenswrapper[31411]: I0224 02:33:11.394323 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c56pq\" (UniqueName: \"kubernetes.io/projected/2da684b9-3acd-40d2-8562-f212bc136dc5-kube-api-access-c56pq\") pod \"ironic-operator-controller-manager-554564d7fc-hksp2\" (UID: \"2da684b9-3acd-40d2-8562-f212bc136dc5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2"
Feb 24 02:33:11.398375 master-0 kubenswrapper[31411]: I0224 02:33:11.394425 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8hb2\" (UniqueName: \"kubernetes.io/projected/3e724dd4-e900-4138-90c3-ee1fc4fc8350-kube-api-access-l8hb2\") pod \"manila-operator-controller-manager-67d996989d-psxsg\" (UID: \"3e724dd4-e900-4138-90c3-ee1fc4fc8350\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg"
Feb 24 02:33:11.398375 master-0 kubenswrapper[31411]: I0224 02:33:11.394542 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ncwh\" (UniqueName: \"kubernetes.io/projected/70a415e4-fc72-4449-87a5-67a04c4ee4aa-kube-api-access-2ncwh\") pod \"keystone-operator-controller-manager-b4d948c87-ws6cb\" (UID: \"70a415e4-fc72-4449-87a5-67a04c4ee4aa\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb"
Feb 24 02:33:11.398907 master-0 kubenswrapper[31411]: I0224 02:33:11.398814 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j"]
Feb 24 02:33:11.421285 master-0 kubenswrapper[31411]: I0224 02:33:11.420321 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws"]
Feb 24 02:33:11.425913 master-0 kubenswrapper[31411]: I0224 02:33:11.421991 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws"
Feb 24 02:33:11.428991 master-0 kubenswrapper[31411]: I0224 02:33:11.427641 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c56pq\" (UniqueName: \"kubernetes.io/projected/2da684b9-3acd-40d2-8562-f212bc136dc5-kube-api-access-c56pq\") pod \"ironic-operator-controller-manager-554564d7fc-hksp2\" (UID: \"2da684b9-3acd-40d2-8562-f212bc136dc5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2"
Feb 24 02:33:11.467261 master-0 kubenswrapper[31411]: I0224 02:33:11.467000 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ncwh\" (UniqueName: \"kubernetes.io/projected/70a415e4-fc72-4449-87a5-67a04c4ee4aa-kube-api-access-2ncwh\") pod \"keystone-operator-controller-manager-b4d948c87-ws6cb\" (UID: \"70a415e4-fc72-4449-87a5-67a04c4ee4aa\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb"
Feb 24 02:33:11.474192 master-0 kubenswrapper[31411]: I0224 02:33:11.474151 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb"
Feb 24 02:33:11.485041 master-0 kubenswrapper[31411]: I0224 02:33:11.478686 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws"]
Feb 24 02:33:11.492020 master-0 kubenswrapper[31411]: I0224 02:33:11.488598 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2"
Feb 24 02:33:11.518906 master-0 kubenswrapper[31411]: I0224 02:33:11.518804 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v5xt\" (UniqueName: \"kubernetes.io/projected/98db9e0e-7186-41ca-af3e-d192ec846273-kube-api-access-6v5xt\") pod \"neutron-operator-controller-manager-6bd4687957-lwlws\" (UID: \"98db9e0e-7186-41ca-af3e-d192ec846273\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws"
Feb 24 02:33:11.519176 master-0 kubenswrapper[31411]: I0224 02:33:11.519047 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8hb2\" (UniqueName: \"kubernetes.io/projected/3e724dd4-e900-4138-90c3-ee1fc4fc8350-kube-api-access-l8hb2\") pod \"manila-operator-controller-manager-67d996989d-psxsg\" (UID: \"3e724dd4-e900-4138-90c3-ee1fc4fc8350\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg"
Feb 24 02:33:11.519781 master-0 kubenswrapper[31411]: I0224 02:33:11.519757 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7z22\" (UniqueName: \"kubernetes.io/projected/7211589b-d7b6-48c3-b3f2-d74d133733b0-kube-api-access-j7z22\") pod \"mariadb-operator-controller-manager-6994f66f48-5xt4j\" (UID: \"7211589b-d7b6-48c3-b3f2-d74d133733b0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j"
Feb 24 02:33:11.548375 master-0 kubenswrapper[31411]: I0224 02:33:11.548318 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8hb2\" (UniqueName: \"kubernetes.io/projected/3e724dd4-e900-4138-90c3-ee1fc4fc8350-kube-api-access-l8hb2\") pod \"manila-operator-controller-manager-67d996989d-psxsg\" (UID: \"3e724dd4-e900-4138-90c3-ee1fc4fc8350\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg"
Feb 24 02:33:11.559255 master-0 kubenswrapper[31411]: I0224 02:33:11.559181 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm"]
Feb 24 02:33:11.560811 master-0 kubenswrapper[31411]: I0224 02:33:11.560779 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm"
Feb 24 02:33:11.566531 master-0 kubenswrapper[31411]: I0224 02:33:11.566486 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr"]
Feb 24 02:33:11.568205 master-0 kubenswrapper[31411]: I0224 02:33:11.567998 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr"
Feb 24 02:33:11.594094 master-0 kubenswrapper[31411]: I0224 02:33:11.593463 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm"]
Feb 24 02:33:11.602274 master-0 kubenswrapper[31411]: I0224 02:33:11.602217 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr"]
Feb 24 02:33:11.614437 master-0 kubenswrapper[31411]: I0224 02:33:11.614391 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz"]
Feb 24 02:33:11.618658 master-0 kubenswrapper[31411]: I0224 02:33:11.616362 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz"
Feb 24 02:33:11.622604 master-0 kubenswrapper[31411]: I0224 02:33:11.622222 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7z22\" (UniqueName: \"kubernetes.io/projected/7211589b-d7b6-48c3-b3f2-d74d133733b0-kube-api-access-j7z22\") pod \"mariadb-operator-controller-manager-6994f66f48-5xt4j\" (UID: \"7211589b-d7b6-48c3-b3f2-d74d133733b0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j"
Feb 24 02:33:11.622604 master-0 kubenswrapper[31411]: I0224 02:33:11.622346 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v5xt\" (UniqueName: \"kubernetes.io/projected/98db9e0e-7186-41ca-af3e-d192ec846273-kube-api-access-6v5xt\") pod \"neutron-operator-controller-manager-6bd4687957-lwlws\" (UID: \"98db9e0e-7186-41ca-af3e-d192ec846273\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws"
Feb 24 02:33:11.622604 master-0 kubenswrapper[31411]: I0224 02:33:11.622472 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 24 02:33:11.652066 master-0 kubenswrapper[31411]: I0224 02:33:11.651979 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz"]
Feb 24 02:33:11.660347 master-0 kubenswrapper[31411]: I0224 02:33:11.659703 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d"]
Feb 24 02:33:11.661660 master-0 kubenswrapper[31411]: I0224 02:33:11.661627 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d"
Feb 24 02:33:11.663901 master-0 kubenswrapper[31411]: I0224 02:33:11.663524 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb"
Feb 24 02:33:11.671289 master-0 kubenswrapper[31411]: I0224 02:33:11.669486 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d"]
Feb 24 02:33:11.687884 master-0 kubenswrapper[31411]: I0224 02:33:11.687388 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v5xt\" (UniqueName: \"kubernetes.io/projected/98db9e0e-7186-41ca-af3e-d192ec846273-kube-api-access-6v5xt\") pod \"neutron-operator-controller-manager-6bd4687957-lwlws\" (UID: \"98db9e0e-7186-41ca-af3e-d192ec846273\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws"
Feb 24 02:33:11.689196 master-0 kubenswrapper[31411]: I0224 02:33:11.689156 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h"]
Feb 24 02:33:11.690910 master-0 kubenswrapper[31411]: I0224 02:33:11.690879 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h"
Feb 24 02:33:11.691619 master-0 kubenswrapper[31411]: I0224 02:33:11.691541 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7z22\" (UniqueName: \"kubernetes.io/projected/7211589b-d7b6-48c3-b3f2-d74d133733b0-kube-api-access-j7z22\") pod \"mariadb-operator-controller-manager-6994f66f48-5xt4j\" (UID: \"7211589b-d7b6-48c3-b3f2-d74d133733b0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j"
Feb 24 02:33:11.697687 master-0 kubenswrapper[31411]: I0224 02:33:11.697657 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg"
Feb 24 02:33:11.714955 master-0 kubenswrapper[31411]: I0224 02:33:11.711920 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h"]
Feb 24 02:33:11.724095 master-0 kubenswrapper[31411]: I0224 02:33:11.724036 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dgf2\" (UniqueName: \"kubernetes.io/projected/8cb468a5-7f35-4562-b24a-ee51dfb14055-kube-api-access-5dgf2\") pod \"nova-operator-controller-manager-567668f5cf-nffrm\" (UID: \"8cb468a5-7f35-4562-b24a-ee51dfb14055\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm"
Feb 24 02:33:11.724193 master-0 kubenswrapper[31411]: I0224 02:33:11.724172 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz"
Feb 24 02:33:11.724234 master-0 kubenswrapper[31411]: I0224 02:33:11.724206 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwl6d\" (UniqueName: \"kubernetes.io/projected/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-kube-api-access-lwl6d\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz"
Feb 24 02:33:11.724288 master-0 kubenswrapper[31411]: I0224 02:33:11.724267 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wmh\" (UniqueName: \"kubernetes.io/projected/0ec9ca9d-8f74-4018-970c-370187583fae-kube-api-access-89wmh\") pod \"octavia-operator-controller-manager-659dc6bbfc-74cdr\" (UID: \"0ec9ca9d-8f74-4018-970c-370187583fae\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr"
Feb 24 02:33:11.748327 master-0 kubenswrapper[31411]: I0224 02:33:11.748062 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-pztlf"]
Feb 24 02:33:11.748327 master-0 kubenswrapper[31411]: I0224 02:33:11.748283 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j"
Feb 24 02:33:11.749437 master-0 kubenswrapper[31411]: I0224 02:33:11.749409 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf"
Feb 24 02:33:11.766941 master-0 kubenswrapper[31411]: I0224 02:33:11.758194 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-pztlf"]
Feb 24 02:33:11.766941 master-0 kubenswrapper[31411]: I0224 02:33:11.766003 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z"]
Feb 24 02:33:11.771116 master-0 kubenswrapper[31411]: I0224 02:33:11.770728 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z"
Feb 24 02:33:11.783808 master-0 kubenswrapper[31411]: I0224 02:33:11.783720 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws"
Feb 24 02:33:11.784058 master-0 kubenswrapper[31411]: I0224 02:33:11.784010 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z"]
Feb 24 02:33:11.792014 master-0 kubenswrapper[31411]: I0224 02:33:11.791967 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj"]
Feb 24 02:33:11.793272 master-0 kubenswrapper[31411]: I0224 02:33:11.793235 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" Feb 24 02:33:11.800105 master-0 kubenswrapper[31411]: I0224 02:33:11.799132 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj"] Feb 24 02:33:11.848596 master-0 kubenswrapper[31411]: I0224 02:33:11.833060 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq"] Feb 24 02:33:11.848596 master-0 kubenswrapper[31411]: I0224 02:33:11.835865 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: I0224 02:33:11.850277 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs5vv\" (UniqueName: \"kubernetes.io/projected/8b4222ec-7c7a-4d15-9b19-75b8a88a722f-kube-api-access-xs5vv\") pod \"placement-operator-controller-manager-8497b45c89-nn47h\" (UID: \"8b4222ec-7c7a-4d15-9b19-75b8a88a722f\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h" Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: I0224 02:33:11.850347 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: I0224 02:33:11.850381 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwl6d\" (UniqueName: \"kubernetes.io/projected/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-kube-api-access-lwl6d\") pod 
\"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: I0224 02:33:11.850444 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsfpc\" (UniqueName: \"kubernetes.io/projected/7d5cd5c8-c10a-4aee-86ef-810478c8e513-kube-api-access-hsfpc\") pod \"ovn-operator-controller-manager-5955d8c787-55b7d\" (UID: \"7d5cd5c8-c10a-4aee-86ef-810478c8e513\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: I0224 02:33:11.850487 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89wmh\" (UniqueName: \"kubernetes.io/projected/0ec9ca9d-8f74-4018-970c-370187583fae-kube-api-access-89wmh\") pod \"octavia-operator-controller-manager-659dc6bbfc-74cdr\" (UID: \"0ec9ca9d-8f74-4018-970c-370187583fae\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr" Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: I0224 02:33:11.850550 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: I0224 02:33:11.850587 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dgf2\" (UniqueName: \"kubernetes.io/projected/8cb468a5-7f35-4562-b24a-ee51dfb14055-kube-api-access-5dgf2\") pod \"nova-operator-controller-manager-567668f5cf-nffrm\" (UID: \"8cb468a5-7f35-4562-b24a-ee51dfb14055\") " 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm" Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: I0224 02:33:11.850673 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4m86\" (UniqueName: \"kubernetes.io/projected/9cb06807-1302-4179-b56e-08d8b40bb159-kube-api-access-d4m86\") pod \"telemetry-operator-controller-manager-589c568786-kwb4z\" (UID: \"9cb06807-1302-4179-b56e-08d8b40bb159\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: E0224 02:33:11.851300 31411 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: E0224 02:33:11.851354 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert podName:e812dec6-4f25-4ba5-b08b-c2c7db77b4b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:12.351335688 +0000 UTC m=+735.568533534 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" (UID: "e812dec6-4f25-4ba5-b08b-c2c7db77b4b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: E0224 02:33:11.851412 31411 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:11.857512 master-0 kubenswrapper[31411]: E0224 02:33:11.851717 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert podName:37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:12.851705269 +0000 UTC m=+736.068903115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert") pod "infra-operator-controller-manager-5f879c76b6-2kk8t" (UID: "37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4") : secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:11.865633 master-0 kubenswrapper[31411]: I0224 02:33:11.864591 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq"] Feb 24 02:33:11.905834 master-0 kubenswrapper[31411]: I0224 02:33:11.900963 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89wmh\" (UniqueName: \"kubernetes.io/projected/0ec9ca9d-8f74-4018-970c-370187583fae-kube-api-access-89wmh\") pod \"octavia-operator-controller-manager-659dc6bbfc-74cdr\" (UID: \"0ec9ca9d-8f74-4018-970c-370187583fae\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr" Feb 24 02:33:11.905834 master-0 kubenswrapper[31411]: I0224 02:33:11.902735 31411 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr" Feb 24 02:33:11.942668 master-0 kubenswrapper[31411]: I0224 02:33:11.942546 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dgf2\" (UniqueName: \"kubernetes.io/projected/8cb468a5-7f35-4562-b24a-ee51dfb14055-kube-api-access-5dgf2\") pod \"nova-operator-controller-manager-567668f5cf-nffrm\" (UID: \"8cb468a5-7f35-4562-b24a-ee51dfb14055\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm" Feb 24 02:33:11.942773 master-0 kubenswrapper[31411]: I0224 02:33:11.942668 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq"] Feb 24 02:33:11.945940 master-0 kubenswrapper[31411]: I0224 02:33:11.945917 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:11.946775 master-0 kubenswrapper[31411]: I0224 02:33:11.946732 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwl6d\" (UniqueName: \"kubernetes.io/projected/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-kube-api-access-lwl6d\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:11.951371 master-0 kubenswrapper[31411]: I0224 02:33:11.951323 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 24 02:33:11.951844 master-0 kubenswrapper[31411]: I0224 02:33:11.951801 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 24 02:33:11.960913 master-0 kubenswrapper[31411]: I0224 02:33:11.960872 31411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4m86\" (UniqueName: \"kubernetes.io/projected/9cb06807-1302-4179-b56e-08d8b40bb159-kube-api-access-d4m86\") pod \"telemetry-operator-controller-manager-589c568786-kwb4z\" (UID: \"9cb06807-1302-4179-b56e-08d8b40bb159\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" Feb 24 02:33:11.960996 master-0 kubenswrapper[31411]: I0224 02:33:11.960954 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvlpg\" (UniqueName: \"kubernetes.io/projected/0133c144-0458-420c-a0fa-a5f2874e918f-kube-api-access-rvlpg\") pod \"watcher-operator-controller-manager-bccc79885-4pjvq\" (UID: \"0133c144-0458-420c-a0fa-a5f2874e918f\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" Feb 24 02:33:11.960996 master-0 kubenswrapper[31411]: I0224 02:33:11.960990 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs5vv\" (UniqueName: \"kubernetes.io/projected/8b4222ec-7c7a-4d15-9b19-75b8a88a722f-kube-api-access-xs5vv\") pod \"placement-operator-controller-manager-8497b45c89-nn47h\" (UID: \"8b4222ec-7c7a-4d15-9b19-75b8a88a722f\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h" Feb 24 02:33:11.961453 master-0 kubenswrapper[31411]: I0224 02:33:11.961418 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvxj\" (UniqueName: \"kubernetes.io/projected/6dd1b2ec-e2ee-4d27-9e5b-58c72201db10-kube-api-access-ktvxj\") pod \"swift-operator-controller-manager-68f46476f-pztlf\" (UID: \"6dd1b2ec-e2ee-4d27-9e5b-58c72201db10\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf" Feb 24 02:33:11.961636 master-0 kubenswrapper[31411]: I0224 02:33:11.961562 31411 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hsfpc\" (UniqueName: \"kubernetes.io/projected/7d5cd5c8-c10a-4aee-86ef-810478c8e513-kube-api-access-hsfpc\") pod \"ovn-operator-controller-manager-5955d8c787-55b7d\" (UID: \"7d5cd5c8-c10a-4aee-86ef-810478c8e513\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" Feb 24 02:33:11.969020 master-0 kubenswrapper[31411]: I0224 02:33:11.968978 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd8pv\" (UniqueName: \"kubernetes.io/projected/54f77f9a-02de-4768-8102-ed59169bc9ed-kube-api-access-pd8pv\") pod \"test-operator-controller-manager-5dc6794d5b-4djnj\" (UID: \"54f77f9a-02de-4768-8102-ed59169bc9ed\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" Feb 24 02:33:12.000042 master-0 kubenswrapper[31411]: I0224 02:33:11.989587 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq"] Feb 24 02:33:12.000042 master-0 kubenswrapper[31411]: I0224 02:33:11.997440 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4m86\" (UniqueName: \"kubernetes.io/projected/9cb06807-1302-4179-b56e-08d8b40bb159-kube-api-access-d4m86\") pod \"telemetry-operator-controller-manager-589c568786-kwb4z\" (UID: \"9cb06807-1302-4179-b56e-08d8b40bb159\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" Feb 24 02:33:12.008117 master-0 kubenswrapper[31411]: I0224 02:33:12.008068 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsfpc\" (UniqueName: \"kubernetes.io/projected/7d5cd5c8-c10a-4aee-86ef-810478c8e513-kube-api-access-hsfpc\") pod \"ovn-operator-controller-manager-5955d8c787-55b7d\" (UID: \"7d5cd5c8-c10a-4aee-86ef-810478c8e513\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" Feb 24 02:33:12.011387 master-0 
kubenswrapper[31411]: I0224 02:33:12.009998 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs5vv\" (UniqueName: \"kubernetes.io/projected/8b4222ec-7c7a-4d15-9b19-75b8a88a722f-kube-api-access-xs5vv\") pod \"placement-operator-controller-manager-8497b45c89-nn47h\" (UID: \"8b4222ec-7c7a-4d15-9b19-75b8a88a722f\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h" Feb 24 02:33:12.016818 master-0 kubenswrapper[31411]: I0224 02:33:12.013197 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" Feb 24 02:33:12.031028 master-0 kubenswrapper[31411]: I0224 02:33:12.030985 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h" Feb 24 02:33:12.035045 master-0 kubenswrapper[31411]: I0224 02:33:12.034425 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw"] Feb 24 02:33:12.036811 master-0 kubenswrapper[31411]: I0224 02:33:12.036782 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw" Feb 24 02:33:12.044989 master-0 kubenswrapper[31411]: I0224 02:33:12.044852 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm" Feb 24 02:33:12.050267 master-0 kubenswrapper[31411]: I0224 02:33:12.050226 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw"] Feb 24 02:33:12.067797 master-0 kubenswrapper[31411]: I0224 02:33:12.067755 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" Feb 24 02:33:12.073039 master-0 kubenswrapper[31411]: I0224 02:33:12.071181 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktvxj\" (UniqueName: \"kubernetes.io/projected/6dd1b2ec-e2ee-4d27-9e5b-58c72201db10-kube-api-access-ktvxj\") pod \"swift-operator-controller-manager-68f46476f-pztlf\" (UID: \"6dd1b2ec-e2ee-4d27-9e5b-58c72201db10\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf" Feb 24 02:33:12.073039 master-0 kubenswrapper[31411]: I0224 02:33:12.071221 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztsfk\" (UniqueName: \"kubernetes.io/projected/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-kube-api-access-ztsfk\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:12.073039 master-0 kubenswrapper[31411]: I0224 02:33:12.071330 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd8pv\" (UniqueName: \"kubernetes.io/projected/54f77f9a-02de-4768-8102-ed59169bc9ed-kube-api-access-pd8pv\") pod \"test-operator-controller-manager-5dc6794d5b-4djnj\" (UID: \"54f77f9a-02de-4768-8102-ed59169bc9ed\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" Feb 24 02:33:12.073039 master-0 kubenswrapper[31411]: I0224 02:33:12.071374 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " 
pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:12.073039 master-0 kubenswrapper[31411]: I0224 02:33:12.071413 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvlpg\" (UniqueName: \"kubernetes.io/projected/0133c144-0458-420c-a0fa-a5f2874e918f-kube-api-access-rvlpg\") pod \"watcher-operator-controller-manager-bccc79885-4pjvq\" (UID: \"0133c144-0458-420c-a0fa-a5f2874e918f\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" Feb 24 02:33:12.073039 master-0 kubenswrapper[31411]: I0224 02:33:12.071438 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:12.111141 master-0 kubenswrapper[31411]: I0224 02:33:12.111084 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktvxj\" (UniqueName: \"kubernetes.io/projected/6dd1b2ec-e2ee-4d27-9e5b-58c72201db10-kube-api-access-ktvxj\") pod \"swift-operator-controller-manager-68f46476f-pztlf\" (UID: \"6dd1b2ec-e2ee-4d27-9e5b-58c72201db10\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf" Feb 24 02:33:12.135668 master-0 kubenswrapper[31411]: I0224 02:33:12.127080 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd8pv\" (UniqueName: \"kubernetes.io/projected/54f77f9a-02de-4768-8102-ed59169bc9ed-kube-api-access-pd8pv\") pod \"test-operator-controller-manager-5dc6794d5b-4djnj\" (UID: \"54f77f9a-02de-4768-8102-ed59169bc9ed\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" Feb 24 02:33:12.135668 master-0 
kubenswrapper[31411]: I0224 02:33:12.129891 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvlpg\" (UniqueName: \"kubernetes.io/projected/0133c144-0458-420c-a0fa-a5f2874e918f-kube-api-access-rvlpg\") pod \"watcher-operator-controller-manager-bccc79885-4pjvq\" (UID: \"0133c144-0458-420c-a0fa-a5f2874e918f\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" Feb 24 02:33:12.182269 master-0 kubenswrapper[31411]: I0224 02:33:12.182213 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:12.182482 master-0 kubenswrapper[31411]: I0224 02:33:12.182286 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt87b\" (UniqueName: \"kubernetes.io/projected/91435ed0-9742-447d-b192-beb911f7782e-kube-api-access-mt87b\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qxzpw\" (UID: \"91435ed0-9742-447d-b192-beb911f7782e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw" Feb 24 02:33:12.182482 master-0 kubenswrapper[31411]: I0224 02:33:12.182339 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:12.182482 master-0 kubenswrapper[31411]: I0224 02:33:12.182409 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-ztsfk\" (UniqueName: \"kubernetes.io/projected/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-kube-api-access-ztsfk\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:12.184739 master-0 kubenswrapper[31411]: E0224 02:33:12.182829 31411 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 02:33:12.184739 master-0 kubenswrapper[31411]: E0224 02:33:12.182932 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:12.682909298 +0000 UTC m=+735.900107144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "webhook-server-cert" not found Feb 24 02:33:12.184739 master-0 kubenswrapper[31411]: E0224 02:33:12.184066 31411 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 02:33:12.184739 master-0 kubenswrapper[31411]: E0224 02:33:12.184134 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:12.684116292 +0000 UTC m=+735.901314138 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "metrics-server-cert" not found Feb 24 02:33:12.212667 master-0 kubenswrapper[31411]: I0224 02:33:12.208241 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztsfk\" (UniqueName: \"kubernetes.io/projected/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-kube-api-access-ztsfk\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:12.234779 master-0 kubenswrapper[31411]: W0224 02:33:12.234525 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd4e8aac_8f11_4e85_ac94_2160ae3adf4c.slice/crio-0f1d84e4611a67a1d2036c087ac5ed16051c2816fbbdaa8ea31a96f77f6edecd WatchSource:0}: Error finding container 0f1d84e4611a67a1d2036c087ac5ed16051c2816fbbdaa8ea31a96f77f6edecd: Status 404 returned error can't find the container with id 0f1d84e4611a67a1d2036c087ac5ed16051c2816fbbdaa8ea31a96f77f6edecd Feb 24 02:33:12.272862 master-0 kubenswrapper[31411]: I0224 02:33:12.264501 31411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 02:33:12.289718 master-0 kubenswrapper[31411]: I0224 02:33:12.284516 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt87b\" (UniqueName: \"kubernetes.io/projected/91435ed0-9742-447d-b192-beb911f7782e-kube-api-access-mt87b\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qxzpw\" (UID: \"91435ed0-9742-447d-b192-beb911f7782e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw" Feb 24 02:33:12.289718 master-0 
kubenswrapper[31411]: I0224 02:33:12.287665 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt"] Feb 24 02:33:12.327630 master-0 kubenswrapper[31411]: I0224 02:33:12.325662 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt87b\" (UniqueName: \"kubernetes.io/projected/91435ed0-9742-447d-b192-beb911f7782e-kube-api-access-mt87b\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qxzpw\" (UID: \"91435ed0-9742-447d-b192-beb911f7782e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw" Feb 24 02:33:12.345629 master-0 kubenswrapper[31411]: I0224 02:33:12.344786 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf" Feb 24 02:33:12.386460 master-0 kubenswrapper[31411]: I0224 02:33:12.386395 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:12.387106 master-0 kubenswrapper[31411]: E0224 02:33:12.386700 31411 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:12.387106 master-0 kubenswrapper[31411]: E0224 02:33:12.386783 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert podName:e812dec6-4f25-4ba5-b08b-c2c7db77b4b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:13.386765021 +0000 UTC m=+736.603962867 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" (UID: "e812dec6-4f25-4ba5-b08b-c2c7db77b4b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:12.397926 master-0 kubenswrapper[31411]: I0224 02:33:12.397897 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" Feb 24 02:33:12.416327 master-0 kubenswrapper[31411]: I0224 02:33:12.412132 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" Feb 24 02:33:12.466940 master-0 kubenswrapper[31411]: I0224 02:33:12.460716 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2"] Feb 24 02:33:12.478955 master-0 kubenswrapper[31411]: I0224 02:33:12.478889 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc"] Feb 24 02:33:12.492657 master-0 kubenswrapper[31411]: W0224 02:33:12.491285 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cac8bf4_b1c9_4b3c_a536_8408f6ad8495.slice/crio-7f8a6f00b94f2a82dabd891e256d99ff2a68c222d9eeedeccbb398a3ca65939b WatchSource:0}: Error finding container 7f8a6f00b94f2a82dabd891e256d99ff2a68c222d9eeedeccbb398a3ca65939b: Status 404 returned error can't find the container with id 7f8a6f00b94f2a82dabd891e256d99ff2a68c222d9eeedeccbb398a3ca65939b Feb 24 02:33:12.613183 master-0 kubenswrapper[31411]: I0224 02:33:12.613025 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw" Feb 24 02:33:12.627119 master-0 kubenswrapper[31411]: I0224 02:33:12.626535 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69"] Feb 24 02:33:12.647878 master-0 kubenswrapper[31411]: W0224 02:33:12.647817 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44cfb629_0b50_4e8c_9b4c_e329a1b3c533.slice/crio-5642c1e8dc3e5c03b5f26b1304297d2b620d20e67a39c9497f13ac4ef824684f WatchSource:0}: Error finding container 5642c1e8dc3e5c03b5f26b1304297d2b620d20e67a39c9497f13ac4ef824684f: Status 404 returned error can't find the container with id 5642c1e8dc3e5c03b5f26b1304297d2b620d20e67a39c9497f13ac4ef824684f Feb 24 02:33:12.702028 master-0 kubenswrapper[31411]: I0224 02:33:12.701946 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:12.702399 master-0 kubenswrapper[31411]: I0224 02:33:12.702063 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:12.702399 master-0 kubenswrapper[31411]: E0224 02:33:12.702132 31411 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 
02:33:12.702399 master-0 kubenswrapper[31411]: E0224 02:33:12.702223 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:13.70220164 +0000 UTC m=+736.919399486 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "webhook-server-cert" not found Feb 24 02:33:12.702399 master-0 kubenswrapper[31411]: E0224 02:33:12.702367 31411 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 02:33:12.702618 master-0 kubenswrapper[31411]: E0224 02:33:12.702454 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:13.702433987 +0000 UTC m=+736.919631833 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "metrics-server-cert" not found Feb 24 02:33:12.906874 master-0 kubenswrapper[31411]: I0224 02:33:12.906809 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:12.907672 master-0 kubenswrapper[31411]: E0224 02:33:12.907143 31411 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:12.907672 master-0 kubenswrapper[31411]: E0224 02:33:12.907341 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert podName:37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:14.907302518 +0000 UTC m=+738.124500404 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert") pod "infra-operator-controller-manager-5f879c76b6-2kk8t" (UID: "37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4") : secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:13.265299 master-0 kubenswrapper[31411]: I0224 02:33:13.260620 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr"] Feb 24 02:33:13.275825 master-0 kubenswrapper[31411]: I0224 02:33:13.273139 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws"] Feb 24 02:33:13.285495 master-0 kubenswrapper[31411]: I0224 02:33:13.285443 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69" event={"ID":"44cfb629-0b50-4e8c-9b4c-e329a1b3c533","Type":"ContainerStarted","Data":"5642c1e8dc3e5c03b5f26b1304297d2b620d20e67a39c9497f13ac4ef824684f"} Feb 24 02:33:13.286908 master-0 kubenswrapper[31411]: I0224 02:33:13.286872 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2" event={"ID":"7cac8bf4-b1c9-4b3c-a536-8408f6ad8495","Type":"ContainerStarted","Data":"7f8a6f00b94f2a82dabd891e256d99ff2a68c222d9eeedeccbb398a3ca65939b"} Feb 24 02:33:13.286959 master-0 kubenswrapper[31411]: I0224 02:33:13.286908 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt"] Feb 24 02:33:13.288349 master-0 kubenswrapper[31411]: I0224 02:33:13.288223 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt" 
event={"ID":"dd4e8aac-8f11-4e85-ac94-2160ae3adf4c","Type":"ContainerStarted","Data":"0f1d84e4611a67a1d2036c087ac5ed16051c2816fbbdaa8ea31a96f77f6edecd"} Feb 24 02:33:13.289474 master-0 kubenswrapper[31411]: I0224 02:33:13.289444 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc" event={"ID":"129f6086-7edd-41da-adf1-38c9b82e0932","Type":"ContainerStarted","Data":"41a2ab27a46b9a7998dbb4fc25c026c35aef44771ecade28cbc6ba3e376f62b1"} Feb 24 02:33:13.298125 master-0 kubenswrapper[31411]: I0224 02:33:13.298042 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb"] Feb 24 02:33:13.304991 master-0 kubenswrapper[31411]: I0224 02:33:13.304950 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j"] Feb 24 02:33:13.311321 master-0 kubenswrapper[31411]: I0224 02:33:13.311263 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr" event={"ID":"0ec9ca9d-8f74-4018-970c-370187583fae","Type":"ContainerStarted","Data":"f547787968c738a218791f6a9a61fd90ebc24d89a6259ef240ead255810d61c2"} Feb 24 02:33:13.315428 master-0 kubenswrapper[31411]: I0224 02:33:13.313343 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb"] Feb 24 02:33:13.318073 master-0 kubenswrapper[31411]: I0224 02:33:13.318049 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2"] Feb 24 02:33:13.318960 master-0 kubenswrapper[31411]: W0224 02:33:13.318614 31411 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2da684b9_3acd_40d2_8562_f212bc136dc5.slice/crio-18687f1d4ec5685436bc4e4b7c6a3c4d0bf64489940ca70c9689340bec92367f WatchSource:0}: Error finding container 18687f1d4ec5685436bc4e4b7c6a3c4d0bf64489940ca70c9689340bec92367f: Status 404 returned error can't find the container with id 18687f1d4ec5685436bc4e4b7c6a3c4d0bf64489940ca70c9689340bec92367f Feb 24 02:33:13.337131 master-0 kubenswrapper[31411]: I0224 02:33:13.337076 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-psxsg"] Feb 24 02:33:13.420623 master-0 kubenswrapper[31411]: I0224 02:33:13.420460 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:13.421001 master-0 kubenswrapper[31411]: E0224 02:33:13.420956 31411 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:13.421182 master-0 kubenswrapper[31411]: E0224 02:33:13.421066 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert podName:e812dec6-4f25-4ba5-b08b-c2c7db77b4b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:15.421040915 +0000 UTC m=+738.638238761 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" (UID: "e812dec6-4f25-4ba5-b08b-c2c7db77b4b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:13.501090 master-0 kubenswrapper[31411]: I0224 02:33:13.501027 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm"] Feb 24 02:33:13.953648 master-0 kubenswrapper[31411]: I0224 02:33:13.942828 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:13.953648 master-0 kubenswrapper[31411]: E0224 02:33:13.943188 31411 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 02:33:13.953648 master-0 kubenswrapper[31411]: E0224 02:33:13.943394 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:15.943338001 +0000 UTC m=+739.160536007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "webhook-server-cert" not found Feb 24 02:33:13.953648 master-0 kubenswrapper[31411]: I0224 02:33:13.944153 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:13.953648 master-0 kubenswrapper[31411]: E0224 02:33:13.944800 31411 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 02:33:13.953648 master-0 kubenswrapper[31411]: E0224 02:33:13.944915 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:15.944883785 +0000 UTC m=+739.162081801 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "metrics-server-cert" not found Feb 24 02:33:14.044344 master-0 kubenswrapper[31411]: W0224 02:33:14.044284 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cb06807_1302_4179_b56e_08d8b40bb159.slice/crio-760e3d61781b997a938a645b88a8989f535d072d918c34787fda636f70905dfc WatchSource:0}: Error finding container 760e3d61781b997a938a645b88a8989f535d072d918c34787fda636f70905dfc: Status 404 returned error can't find the container with id 760e3d61781b997a938a645b88a8989f535d072d918c34787fda636f70905dfc Feb 24 02:33:14.057741 master-0 kubenswrapper[31411]: W0224 02:33:14.057681 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b4222ec_7c7a_4d15_9b19_75b8a88a722f.slice/crio-ebf30e9cc9065e1c82726015465c38866f73c6d5c9286f1529c03679aa5a7fbc WatchSource:0}: Error finding container ebf30e9cc9065e1c82726015465c38866f73c6d5c9286f1529c03679aa5a7fbc: Status 404 returned error can't find the container with id ebf30e9cc9065e1c82726015465c38866f73c6d5c9286f1529c03679aa5a7fbc Feb 24 02:33:14.058129 master-0 kubenswrapper[31411]: I0224 02:33:14.058096 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z"] Feb 24 02:33:14.067342 master-0 kubenswrapper[31411]: E0224 02:33:14.067274 31411 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d4m86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-589c568786-kwb4z_openstack-operators(9cb06807-1302-4179-b56e-08d8b40bb159): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 02:33:14.068656 master-0 kubenswrapper[31411]: E0224 02:33:14.068604 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" podUID="9cb06807-1302-4179-b56e-08d8b40bb159" Feb 24 02:33:14.075694 master-0 kubenswrapper[31411]: W0224 02:33:14.075580 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d5cd5c8_c10a_4aee_86ef_810478c8e513.slice/crio-478c55487b28e35479885380822b393e180c9c5223160baee3ffe6672dcefe2d WatchSource:0}: Error finding container 478c55487b28e35479885380822b393e180c9c5223160baee3ffe6672dcefe2d: Status 404 returned error can't find the container with id 478c55487b28e35479885380822b393e180c9c5223160baee3ffe6672dcefe2d Feb 24 02:33:14.076112 master-0 kubenswrapper[31411]: E0224 02:33:14.076050 31411 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hsfpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5955d8c787-55b7d_openstack-operators(7d5cd5c8-c10a-4aee-86ef-810478c8e513): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 02:33:14.077385 master-0 kubenswrapper[31411]: E0224 02:33:14.077314 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" podUID="7d5cd5c8-c10a-4aee-86ef-810478c8e513" Feb 24 02:33:14.096241 master-0 kubenswrapper[31411]: I0224 02:33:14.096173 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj"] Feb 24 02:33:14.105920 master-0 kubenswrapper[31411]: I0224 02:33:14.105779 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-pztlf"] Feb 24 02:33:14.114416 master-0 kubenswrapper[31411]: I0224 02:33:14.114173 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d"] Feb 24 02:33:14.121374 master-0 kubenswrapper[31411]: 
I0224 02:33:14.121277 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw"] Feb 24 02:33:14.141146 master-0 kubenswrapper[31411]: I0224 02:33:14.140958 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq"] Feb 24 02:33:14.153434 master-0 kubenswrapper[31411]: I0224 02:33:14.153362 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h"] Feb 24 02:33:14.892848 master-0 kubenswrapper[31411]: I0224 02:33:14.892769 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm" event={"ID":"8cb468a5-7f35-4562-b24a-ee51dfb14055","Type":"ContainerStarted","Data":"fb75968e43a4a1d4b29f8717944b904e31ee42d33c379f25dcc701e1034dbeea"} Feb 24 02:33:14.897972 master-0 kubenswrapper[31411]: I0224 02:33:14.896219 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg" event={"ID":"3e724dd4-e900-4138-90c3-ee1fc4fc8350","Type":"ContainerStarted","Data":"43eb4d58a217d66a087cf91001b263e5e62d5fcdd2c7cbf368322771696cbba4"} Feb 24 02:33:14.912562 master-0 kubenswrapper[31411]: I0224 02:33:14.911342 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" event={"ID":"9cb06807-1302-4179-b56e-08d8b40bb159","Type":"ContainerStarted","Data":"760e3d61781b997a938a645b88a8989f535d072d918c34787fda636f70905dfc"} Feb 24 02:33:14.913958 master-0 kubenswrapper[31411]: E0224 02:33:14.913924 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" podUID="9cb06807-1302-4179-b56e-08d8b40bb159" Feb 24 02:33:14.914062 master-0 kubenswrapper[31411]: I0224 02:33:14.914006 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" event={"ID":"54f77f9a-02de-4768-8102-ed59169bc9ed","Type":"ContainerStarted","Data":"4563960913d245def949ee49b6febd76639ac918b881b1ca4d8fc27409f8d6a1"} Feb 24 02:33:14.915809 master-0 kubenswrapper[31411]: I0224 02:33:14.915713 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw" event={"ID":"91435ed0-9742-447d-b192-beb911f7782e","Type":"ContainerStarted","Data":"c5f4ea75007cef80a481d2e899df25fa2cbece81f99d25125e7136014154193e"} Feb 24 02:33:14.917774 master-0 kubenswrapper[31411]: I0224 02:33:14.917517 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt" event={"ID":"3bb72077-6f36-439c-8cc0-83bdbfcc3935","Type":"ContainerStarted","Data":"fecbcd16c12705cb71967c63294695f433e08cb0aa9c58847c0838a7045ccc0c"} Feb 24 02:33:14.919818 master-0 kubenswrapper[31411]: I0224 02:33:14.919397 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb" event={"ID":"6ae1849b-a4a6-4f60-bf3d-713c1f0df81f","Type":"ContainerStarted","Data":"0c0c8b8cc91ded8a6e5c2c6d60bd6f66eef32a0bb412aa649d63e2f495b47eee"} Feb 24 02:33:14.921199 master-0 kubenswrapper[31411]: I0224 02:33:14.921159 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2" 
event={"ID":"2da684b9-3acd-40d2-8562-f212bc136dc5","Type":"ContainerStarted","Data":"18687f1d4ec5685436bc4e4b7c6a3c4d0bf64489940ca70c9689340bec92367f"} Feb 24 02:33:14.925643 master-0 kubenswrapper[31411]: I0224 02:33:14.925603 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" event={"ID":"7d5cd5c8-c10a-4aee-86ef-810478c8e513","Type":"ContainerStarted","Data":"478c55487b28e35479885380822b393e180c9c5223160baee3ffe6672dcefe2d"} Feb 24 02:33:14.931142 master-0 kubenswrapper[31411]: E0224 02:33:14.930072 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" podUID="7d5cd5c8-c10a-4aee-86ef-810478c8e513" Feb 24 02:33:14.934416 master-0 kubenswrapper[31411]: I0224 02:33:14.933780 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h" event={"ID":"8b4222ec-7c7a-4d15-9b19-75b8a88a722f","Type":"ContainerStarted","Data":"ebf30e9cc9065e1c82726015465c38866f73c6d5c9286f1529c03679aa5a7fbc"} Feb 24 02:33:14.935559 master-0 kubenswrapper[31411]: I0224 02:33:14.935511 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j" event={"ID":"7211589b-d7b6-48c3-b3f2-d74d133733b0","Type":"ContainerStarted","Data":"e8b15093b5cc3057cba77f6be27946604e4ca4b303d140d408edb73bcb406666"} Feb 24 02:33:14.937300 master-0 kubenswrapper[31411]: I0224 02:33:14.937262 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf" 
event={"ID":"6dd1b2ec-e2ee-4d27-9e5b-58c72201db10","Type":"ContainerStarted","Data":"1faba17ae611e2b2e957c8c1997948b8c9320755e7a7d69fc68d0eb5cd007679"} Feb 24 02:33:14.938178 master-0 kubenswrapper[31411]: I0224 02:33:14.938132 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws" event={"ID":"98db9e0e-7186-41ca-af3e-d192ec846273","Type":"ContainerStarted","Data":"e0eedaff6f189449a8ded1674c86051f9c09bf15ebdbfa1ce6f172a0e7e67bcc"} Feb 24 02:33:14.938952 master-0 kubenswrapper[31411]: I0224 02:33:14.938932 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb" event={"ID":"70a415e4-fc72-4449-87a5-67a04c4ee4aa","Type":"ContainerStarted","Data":"5a62a26957b39539f025c5a1c511a296b1dde85ed2bc3df4183d9f621ac50e83"} Feb 24 02:33:14.943447 master-0 kubenswrapper[31411]: I0224 02:33:14.943419 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" event={"ID":"0133c144-0458-420c-a0fa-a5f2874e918f","Type":"ContainerStarted","Data":"72d95b053401ae2cce21103cefb1ade571bd290274377bca198ab85a8ab9c9ed"} Feb 24 02:33:14.963625 master-0 kubenswrapper[31411]: I0224 02:33:14.963545 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:14.964090 master-0 kubenswrapper[31411]: E0224 02:33:14.963831 31411 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:14.964090 master-0 kubenswrapper[31411]: E0224 02:33:14.963949 31411 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert podName:37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:18.963921223 +0000 UTC m=+742.181119069 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert") pod "infra-operator-controller-manager-5f879c76b6-2kk8t" (UID: "37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4") : secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:15.482740 master-0 kubenswrapper[31411]: I0224 02:33:15.482638 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:15.483201 master-0 kubenswrapper[31411]: E0224 02:33:15.482957 31411 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:15.483201 master-0 kubenswrapper[31411]: E0224 02:33:15.483079 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert podName:e812dec6-4f25-4ba5-b08b-c2c7db77b4b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:19.483020839 +0000 UTC m=+742.700218725 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" (UID: "e812dec6-4f25-4ba5-b08b-c2c7db77b4b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:15.972161 master-0 kubenswrapper[31411]: E0224 02:33:15.972077 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" podUID="9cb06807-1302-4179-b56e-08d8b40bb159" Feb 24 02:33:15.972566 master-0 kubenswrapper[31411]: E0224 02:33:15.972172 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" podUID="7d5cd5c8-c10a-4aee-86ef-810478c8e513" Feb 24 02:33:15.992989 master-0 kubenswrapper[31411]: I0224 02:33:15.992955 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:15.993065 master-0 kubenswrapper[31411]: I0224 02:33:15.993015 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:15.993161 master-0 kubenswrapper[31411]: E0224 02:33:15.993139 31411 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 02:33:15.993225 master-0 kubenswrapper[31411]: E0224 02:33:15.993187 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:19.993171136 +0000 UTC m=+743.210368982 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "metrics-server-cert" not found Feb 24 02:33:15.993274 master-0 kubenswrapper[31411]: E0224 02:33:15.993194 31411 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 02:33:15.993358 master-0 kubenswrapper[31411]: E0224 02:33:15.993324 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:19.99328619 +0000 UTC m=+743.210484026 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "webhook-server-cert" not found Feb 24 02:33:19.012275 master-0 kubenswrapper[31411]: I0224 02:33:19.012199 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:19.012942 master-0 kubenswrapper[31411]: E0224 02:33:19.012424 31411 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:19.012942 master-0 kubenswrapper[31411]: E0224 02:33:19.012538 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert podName:37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:27.012511337 +0000 UTC m=+750.229709183 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert") pod "infra-operator-controller-manager-5f879c76b6-2kk8t" (UID: "37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4") : secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:19.525230 master-0 kubenswrapper[31411]: I0224 02:33:19.525088 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:19.525667 master-0 kubenswrapper[31411]: E0224 02:33:19.525387 31411 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:19.525667 master-0 kubenswrapper[31411]: E0224 02:33:19.525520 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert podName:e812dec6-4f25-4ba5-b08b-c2c7db77b4b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:27.525486872 +0000 UTC m=+750.742684758 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" (UID: "e812dec6-4f25-4ba5-b08b-c2c7db77b4b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:20.038375 master-0 kubenswrapper[31411]: I0224 02:33:20.038268 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:20.039041 master-0 kubenswrapper[31411]: E0224 02:33:20.038639 31411 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 02:33:20.039041 master-0 kubenswrapper[31411]: E0224 02:33:20.038778 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:28.038744606 +0000 UTC m=+751.255942492 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "metrics-server-cert" not found Feb 24 02:33:20.039041 master-0 kubenswrapper[31411]: I0224 02:33:20.038840 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:20.039201 master-0 kubenswrapper[31411]: E0224 02:33:20.039117 31411 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 02:33:20.039318 master-0 kubenswrapper[31411]: E0224 02:33:20.039293 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:28.03925139 +0000 UTC m=+751.256449266 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "webhook-server-cert" not found Feb 24 02:33:27.048019 master-0 kubenswrapper[31411]: I0224 02:33:27.047923 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:27.049392 master-0 kubenswrapper[31411]: E0224 02:33:27.048231 31411 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:27.049392 master-0 kubenswrapper[31411]: E0224 02:33:27.048378 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert podName:37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:43.04834047 +0000 UTC m=+766.265538356 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert") pod "infra-operator-controller-manager-5f879c76b6-2kk8t" (UID: "37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4") : secret "infra-operator-webhook-server-cert" not found Feb 24 02:33:27.559561 master-0 kubenswrapper[31411]: I0224 02:33:27.559485 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:27.559868 master-0 kubenswrapper[31411]: E0224 02:33:27.559787 31411 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:27.559947 master-0 kubenswrapper[31411]: E0224 02:33:27.559905 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert podName:e812dec6-4f25-4ba5-b08b-c2c7db77b4b3 nodeName:}" failed. No retries permitted until 2026-02-24 02:33:43.559874666 +0000 UTC m=+766.777072542 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" (UID: "e812dec6-4f25-4ba5-b08b-c2c7db77b4b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 02:33:28.073236 master-0 kubenswrapper[31411]: I0224 02:33:28.073133 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:28.074357 master-0 kubenswrapper[31411]: I0224 02:33:28.073289 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:28.074357 master-0 kubenswrapper[31411]: E0224 02:33:28.073692 31411 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 02:33:28.074357 master-0 kubenswrapper[31411]: E0224 02:33:28.073808 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:44.073782438 +0000 UTC m=+767.290980324 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "webhook-server-cert" not found Feb 24 02:33:28.074357 master-0 kubenswrapper[31411]: E0224 02:33:28.073697 31411 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 02:33:28.074357 master-0 kubenswrapper[31411]: E0224 02:33:28.073933 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs podName:e2268b3c-ccf6-4309-ab1e-6c083c1f78cf nodeName:}" failed. No retries permitted until 2026-02-24 02:33:44.073905541 +0000 UTC m=+767.291103427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-q59hq" (UID: "e2268b3c-ccf6-4309-ab1e-6c083c1f78cf") : secret "metrics-server-cert" not found Feb 24 02:33:35.578070 master-0 kubenswrapper[31411]: I0224 02:33:35.576974 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j" event={"ID":"7211589b-d7b6-48c3-b3f2-d74d133733b0","Type":"ContainerStarted","Data":"d978fca1fe5ed32f3081ba760295cc1af71c77a92e294b282ee2b6fead9169f5"} Feb 24 02:33:35.578070 master-0 kubenswrapper[31411]: I0224 02:33:35.577198 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j" Feb 24 02:33:35.647601 master-0 kubenswrapper[31411]: I0224 02:33:35.647374 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j" 
podStartSLOduration=4.783244034 podStartE2EDuration="24.647340412s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:13.320816796 +0000 UTC m=+736.538014632" lastFinishedPulling="2026-02-24 02:33:33.184913164 +0000 UTC m=+756.402111010" observedRunningTime="2026-02-24 02:33:35.606241664 +0000 UTC m=+758.823439550" watchObservedRunningTime="2026-02-24 02:33:35.647340412 +0000 UTC m=+758.864538268" Feb 24 02:33:36.588457 master-0 kubenswrapper[31411]: I0224 02:33:36.588334 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt" event={"ID":"dd4e8aac-8f11-4e85-ac94-2160ae3adf4c","Type":"ContainerStarted","Data":"d8ef2f59c51554299b7f7e49c18f41ce7072f42b0799087dd1f31a8948c6065f"} Feb 24 02:33:36.589415 master-0 kubenswrapper[31411]: I0224 02:33:36.589374 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt" Feb 24 02:33:36.592863 master-0 kubenswrapper[31411]: I0224 02:33:36.592828 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc" event={"ID":"129f6086-7edd-41da-adf1-38c9b82e0932","Type":"ContainerStarted","Data":"410470054c30d320498743d4add6958308baf7b03161a88a2a98704be1107cc9"} Feb 24 02:33:36.593089 master-0 kubenswrapper[31411]: I0224 02:33:36.593039 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc" Feb 24 02:33:36.595419 master-0 kubenswrapper[31411]: I0224 02:33:36.595393 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf" event={"ID":"6dd1b2ec-e2ee-4d27-9e5b-58c72201db10","Type":"ContainerStarted","Data":"deb2ad776b11191fb040b09b5f6986920518c6d40398f2bbf442f2a545f0fefa"} Feb 24 
02:33:36.595923 master-0 kubenswrapper[31411]: I0224 02:33:36.595901 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf" Feb 24 02:33:36.601486 master-0 kubenswrapper[31411]: I0224 02:33:36.601429 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw" event={"ID":"91435ed0-9742-447d-b192-beb911f7782e","Type":"ContainerStarted","Data":"1999273616ab4ccacdb35d2bfab266377849d05e13f29595bc2479fe0ccc6b87"} Feb 24 02:33:36.607172 master-0 kubenswrapper[31411]: I0224 02:33:36.607142 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr" event={"ID":"0ec9ca9d-8f74-4018-970c-370187583fae","Type":"ContainerStarted","Data":"59a5bccd624b96672ced7f196cbab47af61b822f229419a5501d2f6646cd870b"} Feb 24 02:33:36.607346 master-0 kubenswrapper[31411]: I0224 02:33:36.607309 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr" Feb 24 02:33:36.609113 master-0 kubenswrapper[31411]: I0224 02:33:36.609076 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h" event={"ID":"8b4222ec-7c7a-4d15-9b19-75b8a88a722f","Type":"ContainerStarted","Data":"dc0d29c9974a75aa5707200c428e542d2912ed377c8a7d10d85969e33c00a1ca"} Feb 24 02:33:36.609200 master-0 kubenswrapper[31411]: I0224 02:33:36.609183 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h" Feb 24 02:33:36.610896 master-0 kubenswrapper[31411]: I0224 02:33:36.610822 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2" 
event={"ID":"7cac8bf4-b1c9-4b3c-a536-8408f6ad8495","Type":"ContainerStarted","Data":"feaf63daf7dd8de31a74812afc7bede2ec7c0c65d66fc129c2a5f8e39f912887"} Feb 24 02:33:36.611073 master-0 kubenswrapper[31411]: I0224 02:33:36.611053 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2" Feb 24 02:33:36.613425 master-0 kubenswrapper[31411]: I0224 02:33:36.613370 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt" podStartSLOduration=6.299564843 podStartE2EDuration="26.61335435s" podCreationTimestamp="2026-02-24 02:33:10 +0000 UTC" firstStartedPulling="2026-02-24 02:33:12.264423545 +0000 UTC m=+735.481621391" lastFinishedPulling="2026-02-24 02:33:32.578213052 +0000 UTC m=+755.795410898" observedRunningTime="2026-02-24 02:33:36.611851338 +0000 UTC m=+759.829049184" watchObservedRunningTime="2026-02-24 02:33:36.61335435 +0000 UTC m=+759.830552196" Feb 24 02:33:36.613999 master-0 kubenswrapper[31411]: I0224 02:33:36.613846 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" event={"ID":"54f77f9a-02de-4768-8102-ed59169bc9ed","Type":"ContainerStarted","Data":"c8f3f0c78730162bb33aa76c2ccf3b24b6030c9bc4e0755c7857ec58e11bffb1"} Feb 24 02:33:36.617000 master-0 kubenswrapper[31411]: I0224 02:33:36.616964 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" event={"ID":"7d5cd5c8-c10a-4aee-86ef-810478c8e513","Type":"ContainerStarted","Data":"07c4fa97bb26690948723c245f3d5ec2bab51b7b0b0cc21ca07141e54fd115ed"} Feb 24 02:33:36.617429 master-0 kubenswrapper[31411]: I0224 02:33:36.617397 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" Feb 24 
02:33:36.627295 master-0 kubenswrapper[31411]: I0224 02:33:36.624714 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws" event={"ID":"98db9e0e-7186-41ca-af3e-d192ec846273","Type":"ContainerStarted","Data":"4bc2ffe6486dc890f571d198a1757f1d9e12cb06c0e16300294a7086c1fca932"} Feb 24 02:33:36.627295 master-0 kubenswrapper[31411]: I0224 02:33:36.625174 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws" Feb 24 02:33:36.642680 master-0 kubenswrapper[31411]: I0224 02:33:36.639767 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm" event={"ID":"8cb468a5-7f35-4562-b24a-ee51dfb14055","Type":"ContainerStarted","Data":"6f7deb8a01a529cc1f244df70a8c273b9c7886d8143b6c60e8beb3074f524ab3"} Feb 24 02:33:36.642680 master-0 kubenswrapper[31411]: I0224 02:33:36.640845 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm" Feb 24 02:33:36.659596 master-0 kubenswrapper[31411]: I0224 02:33:36.659444 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" event={"ID":"9cb06807-1302-4179-b56e-08d8b40bb159","Type":"ContainerStarted","Data":"90cceee35f4bed6e68b31c4fc568c8c3ab91e52dfe24ef763a5f0855defff3de"} Feb 24 02:33:36.659856 master-0 kubenswrapper[31411]: I0224 02:33:36.659727 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" Feb 24 02:33:36.662960 master-0 kubenswrapper[31411]: I0224 02:33:36.660276 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr" 
podStartSLOduration=3.835128387 podStartE2EDuration="25.66026448s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:13.255481222 +0000 UTC m=+736.472679078" lastFinishedPulling="2026-02-24 02:33:35.080617315 +0000 UTC m=+758.297815171" observedRunningTime="2026-02-24 02:33:36.644988273 +0000 UTC m=+759.862186119" watchObservedRunningTime="2026-02-24 02:33:36.66026448 +0000 UTC m=+759.877462326" Feb 24 02:33:36.684388 master-0 kubenswrapper[31411]: I0224 02:33:36.683964 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69" event={"ID":"44cfb629-0b50-4e8c-9b4c-e329a1b3c533","Type":"ContainerStarted","Data":"a444e7b270d85bc3e684c4161313b0effa37ef0c4e5670b8fef5c279244e1d48"} Feb 24 02:33:36.685287 master-0 kubenswrapper[31411]: I0224 02:33:36.685037 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69" Feb 24 02:33:36.693694 master-0 kubenswrapper[31411]: I0224 02:33:36.688052 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb" event={"ID":"6ae1849b-a4a6-4f60-bf3d-713c1f0df81f","Type":"ContainerStarted","Data":"e61fe0b1d0bcafbd6c86ae4ac284f3e6bc5655988b0db575ce35e2355122f475"} Feb 24 02:33:36.693694 master-0 kubenswrapper[31411]: I0224 02:33:36.688486 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb" Feb 24 02:33:36.704548 master-0 kubenswrapper[31411]: I0224 02:33:36.700343 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2" event={"ID":"2da684b9-3acd-40d2-8562-f212bc136dc5","Type":"ContainerStarted","Data":"715d75650fc038d2e66b214e0ee32f439782253ddb81eb63c0a48464e6e08188"} Feb 24 02:33:36.704548 
master-0 kubenswrapper[31411]: I0224 02:33:36.702917 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb" event={"ID":"70a415e4-fc72-4449-87a5-67a04c4ee4aa","Type":"ContainerStarted","Data":"633ec5e9602ca68197cfca1aa6c6cb1d2c4f093ba8f32fd9bae3a24e27d2f6b1"} Feb 24 02:33:36.704548 master-0 kubenswrapper[31411]: I0224 02:33:36.703882 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2" Feb 24 02:33:36.704548 master-0 kubenswrapper[31411]: I0224 02:33:36.703931 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb" Feb 24 02:33:36.710973 master-0 kubenswrapper[31411]: I0224 02:33:36.707650 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg" event={"ID":"3e724dd4-e900-4138-90c3-ee1fc4fc8350","Type":"ContainerStarted","Data":"fb0274e70b26c9e119a21944b7c63cbce9f71006016759b832050e73a5c4e9eb"} Feb 24 02:33:36.710973 master-0 kubenswrapper[31411]: I0224 02:33:36.708241 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg" Feb 24 02:33:36.710973 master-0 kubenswrapper[31411]: I0224 02:33:36.709190 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h" podStartSLOduration=5.968380981 podStartE2EDuration="25.709176935s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:14.062263103 +0000 UTC m=+737.279460949" lastFinishedPulling="2026-02-24 02:33:33.803059027 +0000 UTC m=+757.020256903" observedRunningTime="2026-02-24 02:33:36.708390413 +0000 UTC m=+759.925588259" 
watchObservedRunningTime="2026-02-24 02:33:36.709176935 +0000 UTC m=+759.926374781" Feb 24 02:33:36.725240 master-0 kubenswrapper[31411]: I0224 02:33:36.721315 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" event={"ID":"0133c144-0458-420c-a0fa-a5f2874e918f","Type":"ContainerStarted","Data":"45a6c41b5e4a0ef356272acef03cbb4f7f7e34ac21c9deeb93fbbe5482f25c3f"} Feb 24 02:33:36.725240 master-0 kubenswrapper[31411]: I0224 02:33:36.722165 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" Feb 24 02:33:36.733688 master-0 kubenswrapper[31411]: I0224 02:33:36.730648 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt" event={"ID":"3bb72077-6f36-439c-8cc0-83bdbfcc3935","Type":"ContainerStarted","Data":"167ba41c7aa811fec6697ef64a76c643e79eec3c1444de25b508cf0da71fb408"} Feb 24 02:33:36.767596 master-0 kubenswrapper[31411]: I0224 02:33:36.765554 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc" podStartSLOduration=6.096945414 podStartE2EDuration="26.765534329s" podCreationTimestamp="2026-02-24 02:33:10 +0000 UTC" firstStartedPulling="2026-02-24 02:33:12.515874957 +0000 UTC m=+735.733072803" lastFinishedPulling="2026-02-24 02:33:33.184463862 +0000 UTC m=+756.401661718" observedRunningTime="2026-02-24 02:33:36.760715994 +0000 UTC m=+759.977913840" watchObservedRunningTime="2026-02-24 02:33:36.765534329 +0000 UTC m=+759.982732175" Feb 24 02:33:36.859791 master-0 kubenswrapper[31411]: I0224 02:33:36.859712 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qxzpw" podStartSLOduration=4.81860636 podStartE2EDuration="25.859686518s" 
podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:14.063217639 +0000 UTC m=+737.280415485" lastFinishedPulling="2026-02-24 02:33:35.104297757 +0000 UTC m=+758.321495643" observedRunningTime="2026-02-24 02:33:36.81678986 +0000 UTC m=+760.033987706" watchObservedRunningTime="2026-02-24 02:33:36.859686518 +0000 UTC m=+760.076884364" Feb 24 02:33:36.868559 master-0 kubenswrapper[31411]: I0224 02:33:36.868509 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2" podStartSLOduration=5.012691754 podStartE2EDuration="26.868498944s" podCreationTimestamp="2026-02-24 02:33:10 +0000 UTC" firstStartedPulling="2026-02-24 02:33:12.51061903 +0000 UTC m=+735.727816876" lastFinishedPulling="2026-02-24 02:33:34.36642618 +0000 UTC m=+757.583624066" observedRunningTime="2026-02-24 02:33:36.852474177 +0000 UTC m=+760.069672023" watchObservedRunningTime="2026-02-24 02:33:36.868498944 +0000 UTC m=+760.085696790" Feb 24 02:33:36.916480 master-0 kubenswrapper[31411]: I0224 02:33:36.916215 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf" podStartSLOduration=4.8987891900000005 podStartE2EDuration="25.916190996s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:14.063206839 +0000 UTC m=+737.280404685" lastFinishedPulling="2026-02-24 02:33:35.080608605 +0000 UTC m=+758.297806491" observedRunningTime="2026-02-24 02:33:36.913997835 +0000 UTC m=+760.131195681" watchObservedRunningTime="2026-02-24 02:33:36.916190996 +0000 UTC m=+760.133388842" Feb 24 02:33:36.992305 master-0 kubenswrapper[31411]: I0224 02:33:36.992216 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2" podStartSLOduration=6.131414304 
podStartE2EDuration="25.992190549s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:13.323888462 +0000 UTC m=+736.541086308" lastFinishedPulling="2026-02-24 02:33:33.184664687 +0000 UTC m=+756.401862553" observedRunningTime="2026-02-24 02:33:36.948295763 +0000 UTC m=+760.165493609" watchObservedRunningTime="2026-02-24 02:33:36.992190549 +0000 UTC m=+760.209388395" Feb 24 02:33:37.043544 master-0 kubenswrapper[31411]: I0224 02:33:37.042166 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws" podStartSLOduration=5.552807155 podStartE2EDuration="26.042139324s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:13.314531141 +0000 UTC m=+736.531728987" lastFinishedPulling="2026-02-24 02:33:33.8038633 +0000 UTC m=+757.021061156" observedRunningTime="2026-02-24 02:33:36.981287394 +0000 UTC m=+760.198485240" watchObservedRunningTime="2026-02-24 02:33:37.042139324 +0000 UTC m=+760.259337170" Feb 24 02:33:37.046064 master-0 kubenswrapper[31411]: I0224 02:33:37.046023 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69" podStartSLOduration=5.893348888 podStartE2EDuration="27.046013002s" podCreationTimestamp="2026-02-24 02:33:10 +0000 UTC" firstStartedPulling="2026-02-24 02:33:12.651201786 +0000 UTC m=+735.868399632" lastFinishedPulling="2026-02-24 02:33:33.8038659 +0000 UTC m=+757.021063746" observedRunningTime="2026-02-24 02:33:37.025919941 +0000 UTC m=+760.243117787" watchObservedRunningTime="2026-02-24 02:33:37.046013002 +0000 UTC m=+760.263210848" Feb 24 02:33:37.083597 master-0 kubenswrapper[31411]: I0224 02:33:37.082543 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt" 
podStartSLOduration=6.001292632 podStartE2EDuration="27.082521371s" podCreationTimestamp="2026-02-24 02:33:10 +0000 UTC" firstStartedPulling="2026-02-24 02:33:13.284544723 +0000 UTC m=+736.501742569" lastFinishedPulling="2026-02-24 02:33:34.365773432 +0000 UTC m=+757.582971308" observedRunningTime="2026-02-24 02:33:37.068920232 +0000 UTC m=+760.286118078" watchObservedRunningTime="2026-02-24 02:33:37.082521371 +0000 UTC m=+760.299719217" Feb 24 02:33:37.114590 master-0 kubenswrapper[31411]: I0224 02:33:37.114373 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb" podStartSLOduration=6.596355791 podStartE2EDuration="27.11435364s" podCreationTimestamp="2026-02-24 02:33:10 +0000 UTC" firstStartedPulling="2026-02-24 02:33:13.284876873 +0000 UTC m=+736.502074719" lastFinishedPulling="2026-02-24 02:33:33.802874682 +0000 UTC m=+757.020072568" observedRunningTime="2026-02-24 02:33:37.112735385 +0000 UTC m=+760.329933231" watchObservedRunningTime="2026-02-24 02:33:37.11435364 +0000 UTC m=+760.331551486" Feb 24 02:33:37.144590 master-0 kubenswrapper[31411]: I0224 02:33:37.143556 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm" podStartSLOduration=4.446382756 podStartE2EDuration="26.143536025s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:13.492712937 +0000 UTC m=+736.709910783" lastFinishedPulling="2026-02-24 02:33:35.189866216 +0000 UTC m=+758.407064052" observedRunningTime="2026-02-24 02:33:37.142684272 +0000 UTC m=+760.359882118" watchObservedRunningTime="2026-02-24 02:33:37.143536025 +0000 UTC m=+760.360733871" Feb 24 02:33:37.180700 master-0 kubenswrapper[31411]: I0224 02:33:37.180524 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg" podStartSLOduration=4.414417474 podStartE2EDuration="26.180495858s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:13.317601987 +0000 UTC m=+736.534799833" lastFinishedPulling="2026-02-24 02:33:35.083680361 +0000 UTC m=+758.300878217" observedRunningTime="2026-02-24 02:33:37.1723494 +0000 UTC m=+760.389547246" watchObservedRunningTime="2026-02-24 02:33:37.180495858 +0000 UTC m=+760.397693704" Feb 24 02:33:37.206273 master-0 kubenswrapper[31411]: I0224 02:33:37.205704 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb" podStartSLOduration=4.373868871 podStartE2EDuration="26.205681851s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:13.314294164 +0000 UTC m=+736.531492010" lastFinishedPulling="2026-02-24 02:33:35.146107104 +0000 UTC m=+758.363304990" observedRunningTime="2026-02-24 02:33:37.197887953 +0000 UTC m=+760.415085799" watchObservedRunningTime="2026-02-24 02:33:37.205681851 +0000 UTC m=+760.422879697" Feb 24 02:33:37.220457 master-0 kubenswrapper[31411]: I0224 02:33:37.220393 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" podStartSLOduration=5.095575106 podStartE2EDuration="26.220374821s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:14.066346537 +0000 UTC m=+737.283544373" lastFinishedPulling="2026-02-24 02:33:35.191146202 +0000 UTC m=+758.408344088" observedRunningTime="2026-02-24 02:33:37.217900932 +0000 UTC m=+760.435098768" watchObservedRunningTime="2026-02-24 02:33:37.220374821 +0000 UTC m=+760.437572667" Feb 24 02:33:37.265709 master-0 kubenswrapper[31411]: I0224 02:33:37.265642 31411 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" podStartSLOduration=6.525277053 podStartE2EDuration="26.265626095s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:14.063506487 +0000 UTC m=+737.280704333" lastFinishedPulling="2026-02-24 02:33:33.803855509 +0000 UTC m=+757.021053375" observedRunningTime="2026-02-24 02:33:37.258153576 +0000 UTC m=+760.475351422" watchObservedRunningTime="2026-02-24 02:33:37.265626095 +0000 UTC m=+760.482823941" Feb 24 02:33:37.290930 master-0 kubenswrapper[31411]: I0224 02:33:37.290884 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" podStartSLOduration=5.176020943 podStartE2EDuration="26.29087518s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:14.02992246 +0000 UTC m=+737.247120306" lastFinishedPulling="2026-02-24 02:33:35.144776657 +0000 UTC m=+758.361974543" observedRunningTime="2026-02-24 02:33:37.289103691 +0000 UTC m=+760.506301537" watchObservedRunningTime="2026-02-24 02:33:37.29087518 +0000 UTC m=+760.508073026" Feb 24 02:33:37.745601 master-0 kubenswrapper[31411]: I0224 02:33:37.744832 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt" Feb 24 02:33:37.745601 master-0 kubenswrapper[31411]: I0224 02:33:37.744884 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" Feb 24 02:33:41.247380 master-0 kubenswrapper[31411]: I0224 02:33:41.247309 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-b72xt" Feb 24 02:33:41.260088 master-0 kubenswrapper[31411]: I0224 02:33:41.260032 31411 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-2ldv2" Feb 24 02:33:41.287026 master-0 kubenswrapper[31411]: I0224 02:33:41.286908 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" podStartSLOduration=9.246071905 podStartE2EDuration="30.286883345s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:14.075913914 +0000 UTC m=+737.293111760" lastFinishedPulling="2026-02-24 02:33:35.116725314 +0000 UTC m=+758.333923200" observedRunningTime="2026-02-24 02:33:37.328682646 +0000 UTC m=+760.545880492" watchObservedRunningTime="2026-02-24 02:33:41.286883345 +0000 UTC m=+764.504081191" Feb 24 02:33:41.293207 master-0 kubenswrapper[31411]: I0224 02:33:41.293069 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69" Feb 24 02:33:41.331855 master-0 kubenswrapper[31411]: I0224 02:33:41.331752 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzbvc" Feb 24 02:33:41.385672 master-0 kubenswrapper[31411]: I0224 02:33:41.370090 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt" Feb 24 02:33:41.481429 master-0 kubenswrapper[31411]: I0224 02:33:41.481360 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ws6cb" Feb 24 02:33:41.494892 master-0 kubenswrapper[31411]: I0224 02:33:41.494836 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hksp2" Feb 24 02:33:41.667465 master-0 kubenswrapper[31411]: I0224 
02:33:41.667300 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-49gvb" Feb 24 02:33:41.707611 master-0 kubenswrapper[31411]: I0224 02:33:41.704258 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-psxsg" Feb 24 02:33:41.755312 master-0 kubenswrapper[31411]: I0224 02:33:41.752968 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5xt4j" Feb 24 02:33:41.788623 master-0 kubenswrapper[31411]: I0224 02:33:41.788537 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-lwlws" Feb 24 02:33:41.907020 master-0 kubenswrapper[31411]: I0224 02:33:41.906938 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-74cdr" Feb 24 02:33:42.016343 master-0 kubenswrapper[31411]: I0224 02:33:42.016289 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-55b7d" Feb 24 02:33:42.034178 master-0 kubenswrapper[31411]: I0224 02:33:42.034080 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nn47h" Feb 24 02:33:42.053494 master-0 kubenswrapper[31411]: I0224 02:33:42.053427 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-nffrm" Feb 24 02:33:42.072043 master-0 kubenswrapper[31411]: I0224 02:33:42.071072 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-589c568786-kwb4z" Feb 24 02:33:42.355228 master-0 kubenswrapper[31411]: I0224 02:33:42.355029 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pztlf" Feb 24 02:33:42.410199 master-0 kubenswrapper[31411]: I0224 02:33:42.410064 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4djnj" Feb 24 02:33:42.422353 master-0 kubenswrapper[31411]: I0224 02:33:42.422304 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-4pjvq" Feb 24 02:33:43.132040 master-0 kubenswrapper[31411]: I0224 02:33:43.131960 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:43.135743 master-0 kubenswrapper[31411]: I0224 02:33:43.135681 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4-cert\") pod \"infra-operator-controller-manager-5f879c76b6-2kk8t\" (UID: \"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:43.184602 master-0 kubenswrapper[31411]: I0224 02:33:43.184044 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:43.645065 master-0 kubenswrapper[31411]: I0224 02:33:43.644961 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:43.652399 master-0 kubenswrapper[31411]: I0224 02:33:43.652325 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e812dec6-4f25-4ba5-b08b-c2c7db77b4b3-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b9tqsfz\" (UID: \"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:43.688121 master-0 kubenswrapper[31411]: W0224 02:33:43.686109 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37d3adf3_e0cc_4f32_94ee_f89a8f4f49b4.slice/crio-1ef511de2180b96204ad384bd4fd05f244ce0bfc6d24c45490ad240c88c205e1 WatchSource:0}: Error finding container 1ef511de2180b96204ad384bd4fd05f244ce0bfc6d24c45490ad240c88c205e1: Status 404 returned error can't find the container with id 1ef511de2180b96204ad384bd4fd05f244ce0bfc6d24c45490ad240c88c205e1 Feb 24 02:33:43.690226 master-0 kubenswrapper[31411]: I0224 02:33:43.688503 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t"] Feb 24 02:33:43.734119 master-0 kubenswrapper[31411]: I0224 02:33:43.734054 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:43.849170 master-0 kubenswrapper[31411]: I0224 02:33:43.849093 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" event={"ID":"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4","Type":"ContainerStarted","Data":"1ef511de2180b96204ad384bd4fd05f244ce0bfc6d24c45490ad240c88c205e1"} Feb 24 02:33:44.164761 master-0 kubenswrapper[31411]: I0224 02:33:44.163213 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:44.164761 master-0 kubenswrapper[31411]: I0224 02:33:44.163351 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:44.170334 master-0 kubenswrapper[31411]: I0224 02:33:44.170276 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:44.170664 master-0 kubenswrapper[31411]: I0224 02:33:44.170605 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2268b3c-ccf6-4309-ab1e-6c083c1f78cf-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-q59hq\" (UID: \"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:44.331835 master-0 kubenswrapper[31411]: I0224 02:33:44.331719 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz"] Feb 24 02:33:44.407917 master-0 kubenswrapper[31411]: I0224 02:33:44.407823 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:44.865244 master-0 kubenswrapper[31411]: I0224 02:33:44.865046 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" event={"ID":"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3","Type":"ContainerStarted","Data":"224e84db8b6772f727494ab1101c216e786bcaab074daf3d6f3de3a1e4012304"} Feb 24 02:33:45.058759 master-0 kubenswrapper[31411]: I0224 02:33:45.057919 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq"] Feb 24 02:33:45.059363 master-0 kubenswrapper[31411]: W0224 02:33:45.058859 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2268b3c_ccf6_4309_ab1e_6c083c1f78cf.slice/crio-d7d67cc8de1c7e260b4bc339d3f42fdb42ac0c67701170237eaf5d20b094b073 WatchSource:0}: Error finding container d7d67cc8de1c7e260b4bc339d3f42fdb42ac0c67701170237eaf5d20b094b073: Status 404 returned error can't find the container with id d7d67cc8de1c7e260b4bc339d3f42fdb42ac0c67701170237eaf5d20b094b073 Feb 24 02:33:45.879969 master-0 kubenswrapper[31411]: I0224 02:33:45.879839 31411 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" event={"ID":"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf","Type":"ContainerStarted","Data":"c28e22ef23bfc89195a4f13121322b4bc3403655e8c12639effb9581781d9fc1"} Feb 24 02:33:45.880726 master-0 kubenswrapper[31411]: I0224 02:33:45.880002 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" event={"ID":"e2268b3c-ccf6-4309-ab1e-6c083c1f78cf","Type":"ContainerStarted","Data":"d7d67cc8de1c7e260b4bc339d3f42fdb42ac0c67701170237eaf5d20b094b073"} Feb 24 02:33:45.881067 master-0 kubenswrapper[31411]: I0224 02:33:45.881019 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:33:45.944090 master-0 kubenswrapper[31411]: I0224 02:33:45.934304 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" podStartSLOduration=34.934272222 podStartE2EDuration="34.934272222s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:33:45.918627885 +0000 UTC m=+769.135825771" watchObservedRunningTime="2026-02-24 02:33:45.934272222 +0000 UTC m=+769.151470108" Feb 24 02:33:47.953594 master-0 kubenswrapper[31411]: I0224 02:33:47.949883 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" event={"ID":"e812dec6-4f25-4ba5-b08b-c2c7db77b4b3","Type":"ContainerStarted","Data":"204dbd12658070b2c0f7f700873811af2d2d8f96be034afc4329fba86ad7799f"} Feb 24 02:33:47.953594 master-0 kubenswrapper[31411]: I0224 02:33:47.951372 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:47.972740 master-0 kubenswrapper[31411]: I0224 02:33:47.970785 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" event={"ID":"37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4","Type":"ContainerStarted","Data":"1da015baf3cd9706fa62c18774c6249d8282a5ec6e2f0d2f53fce6df397dc5bb"} Feb 24 02:33:47.972740 master-0 kubenswrapper[31411]: I0224 02:33:47.971972 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:48.002134 master-0 kubenswrapper[31411]: I0224 02:33:48.000328 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" podStartSLOduration=33.855532756 podStartE2EDuration="37.000305969s" podCreationTimestamp="2026-02-24 02:33:11 +0000 UTC" firstStartedPulling="2026-02-24 02:33:44.344627088 +0000 UTC m=+767.561824974" lastFinishedPulling="2026-02-24 02:33:47.489400301 +0000 UTC m=+770.706598187" observedRunningTime="2026-02-24 02:33:47.996034429 +0000 UTC m=+771.213232275" watchObservedRunningTime="2026-02-24 02:33:48.000305969 +0000 UTC m=+771.217503815" Feb 24 02:33:48.042933 master-0 kubenswrapper[31411]: I0224 02:33:48.042770 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" podStartSLOduration=34.268099191 podStartE2EDuration="38.042741244s" podCreationTimestamp="2026-02-24 02:33:10 +0000 UTC" firstStartedPulling="2026-02-24 02:33:43.689800151 +0000 UTC m=+766.906998027" lastFinishedPulling="2026-02-24 02:33:47.464442194 +0000 UTC m=+770.681640080" observedRunningTime="2026-02-24 02:33:48.025504463 +0000 UTC m=+771.242702309" watchObservedRunningTime="2026-02-24 02:33:48.042741244 
+0000 UTC m=+771.259939100" Feb 24 02:33:53.196459 master-0 kubenswrapper[31411]: I0224 02:33:53.196367 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-2kk8t" Feb 24 02:33:53.745167 master-0 kubenswrapper[31411]: I0224 02:33:53.745080 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" Feb 24 02:33:54.415845 master-0 kubenswrapper[31411]: I0224 02:33:54.415742 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" Feb 24 02:34:40.783604 master-0 kubenswrapper[31411]: I0224 02:34:40.783111 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4kmll"] Feb 24 02:34:40.811180 master-0 kubenswrapper[31411]: I0224 02:34:40.811099 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4kmll"] Feb 24 02:34:40.811459 master-0 kubenswrapper[31411]: I0224 02:34:40.811264 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:34:40.814587 master-0 kubenswrapper[31411]: I0224 02:34:40.814512 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 24 02:34:40.814698 master-0 kubenswrapper[31411]: I0224 02:34:40.814646 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 24 02:34:40.814799 master-0 kubenswrapper[31411]: I0224 02:34:40.814660 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 24 02:34:40.925075 master-0 kubenswrapper[31411]: I0224 02:34:40.925010 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-cr468"] Feb 24 02:34:40.927000 master-0 kubenswrapper[31411]: I0224 02:34:40.926972 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:40.935977 master-0 kubenswrapper[31411]: I0224 02:34:40.935907 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-cr468"] Feb 24 02:34:40.939347 master-0 kubenswrapper[31411]: I0224 02:34:40.938430 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 24 02:34:40.975606 master-0 kubenswrapper[31411]: I0224 02:34:40.957016 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk4mc\" (UniqueName: \"kubernetes.io/projected/9d42f804-aae5-4adf-84cf-6887d203342f-kube-api-access-jk4mc\") pod \"dnsmasq-dns-7d4c486879-cr468\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:40.976244 master-0 kubenswrapper[31411]: I0224 02:34:40.976183 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-config\") pod \"dnsmasq-dns-bc7f9869-4kmll\" (UID: \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\") " pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:34:40.987773 master-0 kubenswrapper[31411]: I0224 02:34:40.976534 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkvwn\" (UniqueName: \"kubernetes.io/projected/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-kube-api-access-jkvwn\") pod \"dnsmasq-dns-bc7f9869-4kmll\" (UID: \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\") " pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:34:40.988485 master-0 kubenswrapper[31411]: I0224 02:34:40.988443 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-config\") pod \"dnsmasq-dns-7d4c486879-cr468\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:40.988725 master-0 kubenswrapper[31411]: I0224 02:34:40.988710 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-dns-svc\") pod \"dnsmasq-dns-7d4c486879-cr468\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:41.094517 master-0 kubenswrapper[31411]: I0224 02:34:41.092433 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkvwn\" (UniqueName: \"kubernetes.io/projected/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-kube-api-access-jkvwn\") pod \"dnsmasq-dns-bc7f9869-4kmll\" (UID: \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\") " pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:34:41.094517 master-0 kubenswrapper[31411]: I0224 02:34:41.092595 31411 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-config\") pod \"dnsmasq-dns-7d4c486879-cr468\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:41.094517 master-0 kubenswrapper[31411]: I0224 02:34:41.092647 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-dns-svc\") pod \"dnsmasq-dns-7d4c486879-cr468\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:41.094517 master-0 kubenswrapper[31411]: I0224 02:34:41.092686 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk4mc\" (UniqueName: \"kubernetes.io/projected/9d42f804-aae5-4adf-84cf-6887d203342f-kube-api-access-jk4mc\") pod \"dnsmasq-dns-7d4c486879-cr468\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:41.094517 master-0 kubenswrapper[31411]: I0224 02:34:41.092710 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-config\") pod \"dnsmasq-dns-bc7f9869-4kmll\" (UID: \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\") " pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:34:41.094517 master-0 kubenswrapper[31411]: I0224 02:34:41.093764 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-config\") pod \"dnsmasq-dns-bc7f9869-4kmll\" (UID: \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\") " pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:34:41.094517 master-0 kubenswrapper[31411]: I0224 02:34:41.094174 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-config\") pod \"dnsmasq-dns-7d4c486879-cr468\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:41.094517 master-0 kubenswrapper[31411]: I0224 02:34:41.094407 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-dns-svc\") pod \"dnsmasq-dns-7d4c486879-cr468\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:41.110462 master-0 kubenswrapper[31411]: I0224 02:34:41.110387 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkvwn\" (UniqueName: \"kubernetes.io/projected/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-kube-api-access-jkvwn\") pod \"dnsmasq-dns-bc7f9869-4kmll\" (UID: \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\") " pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:34:41.114864 master-0 kubenswrapper[31411]: I0224 02:34:41.112080 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk4mc\" (UniqueName: \"kubernetes.io/projected/9d42f804-aae5-4adf-84cf-6887d203342f-kube-api-access-jk4mc\") pod \"dnsmasq-dns-7d4c486879-cr468\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:41.163604 master-0 kubenswrapper[31411]: I0224 02:34:41.163366 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:34:41.262537 master-0 kubenswrapper[31411]: I0224 02:34:41.262423 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:34:41.785029 master-0 kubenswrapper[31411]: I0224 02:34:41.784599 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4kmll"] Feb 24 02:34:41.818933 master-0 kubenswrapper[31411]: I0224 02:34:41.818867 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc7f9869-4kmll" event={"ID":"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a","Type":"ContainerStarted","Data":"39addfa10000bc6b3ad84dcc6d47354d6e94e38399750e630bcbb6d2154f967c"} Feb 24 02:34:41.882906 master-0 kubenswrapper[31411]: I0224 02:34:41.882832 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-cr468"] Feb 24 02:34:41.887312 master-0 kubenswrapper[31411]: W0224 02:34:41.887255 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d42f804_aae5_4adf_84cf_6887d203342f.slice/crio-03d9f3337f71c3356ead72ba5723c3886292e40cc87e98e6e358dd51b57ee05a WatchSource:0}: Error finding container 03d9f3337f71c3356ead72ba5723c3886292e40cc87e98e6e358dd51b57ee05a: Status 404 returned error can't find the container with id 03d9f3337f71c3356ead72ba5723c3886292e40cc87e98e6e358dd51b57ee05a Feb 24 02:34:42.149085 master-0 kubenswrapper[31411]: I0224 02:34:42.148953 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-cr468"] Feb 24 02:34:42.182091 master-0 kubenswrapper[31411]: I0224 02:34:42.182017 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-qbhgh"] Feb 24 02:34:42.184021 master-0 kubenswrapper[31411]: I0224 02:34:42.183989 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.199935 master-0 kubenswrapper[31411]: I0224 02:34:42.199880 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-qbhgh"]
Feb 24 02:34:42.224533 master-0 kubenswrapper[31411]: I0224 02:34:42.224454 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pq4r\" (UniqueName: \"kubernetes.io/projected/713f4764-f8a7-4867-bd77-54c68933ca65-kube-api-access-6pq4r\") pod \"dnsmasq-dns-6974cff98c-qbhgh\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.225028 master-0 kubenswrapper[31411]: I0224 02:34:42.224983 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-dns-svc\") pod \"dnsmasq-dns-6974cff98c-qbhgh\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.225110 master-0 kubenswrapper[31411]: I0224 02:34:42.225080 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-config\") pod \"dnsmasq-dns-6974cff98c-qbhgh\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.329610 master-0 kubenswrapper[31411]: I0224 02:34:42.329252 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pq4r\" (UniqueName: \"kubernetes.io/projected/713f4764-f8a7-4867-bd77-54c68933ca65-kube-api-access-6pq4r\") pod \"dnsmasq-dns-6974cff98c-qbhgh\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.329610 master-0 kubenswrapper[31411]: I0224 02:34:42.329496 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-dns-svc\") pod \"dnsmasq-dns-6974cff98c-qbhgh\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.329610 master-0 kubenswrapper[31411]: I0224 02:34:42.329567 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-config\") pod \"dnsmasq-dns-6974cff98c-qbhgh\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.331035 master-0 kubenswrapper[31411]: I0224 02:34:42.331000 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-config\") pod \"dnsmasq-dns-6974cff98c-qbhgh\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.331724 master-0 kubenswrapper[31411]: I0224 02:34:42.331691 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-dns-svc\") pod \"dnsmasq-dns-6974cff98c-qbhgh\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.348677 master-0 kubenswrapper[31411]: I0224 02:34:42.348616 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pq4r\" (UniqueName: \"kubernetes.io/projected/713f4764-f8a7-4867-bd77-54c68933ca65-kube-api-access-6pq4r\") pod \"dnsmasq-dns-6974cff98c-qbhgh\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.520765 master-0 kubenswrapper[31411]: I0224 02:34:42.520460 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh"
Feb 24 02:34:42.833999 master-0 kubenswrapper[31411]: I0224 02:34:42.833821 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4c486879-cr468" event={"ID":"9d42f804-aae5-4adf-84cf-6887d203342f","Type":"ContainerStarted","Data":"03d9f3337f71c3356ead72ba5723c3886292e40cc87e98e6e358dd51b57ee05a"}
Feb 24 02:34:43.382552 master-0 kubenswrapper[31411]: I0224 02:34:43.382416 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4kmll"]
Feb 24 02:34:43.411268 master-0 kubenswrapper[31411]: I0224 02:34:43.411196 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-jf69p"]
Feb 24 02:34:43.420934 master-0 kubenswrapper[31411]: I0224 02:34:43.419980 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.467658 master-0 kubenswrapper[31411]: I0224 02:34:43.467504 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-jf69p"]
Feb 24 02:34:43.509167 master-0 kubenswrapper[31411]: I0224 02:34:43.506616 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-dns-svc\") pod \"dnsmasq-dns-7c45d57b9c-jf69p\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.510930 master-0 kubenswrapper[31411]: I0224 02:34:43.510900 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-config\") pod \"dnsmasq-dns-7c45d57b9c-jf69p\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.511024 master-0 kubenswrapper[31411]: I0224 02:34:43.510987 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfhw\" (UniqueName: \"kubernetes.io/projected/bef074ce-14e6-4258-8172-bd2e640ae24b-kube-api-access-fxfhw\") pod \"dnsmasq-dns-7c45d57b9c-jf69p\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.620822 master-0 kubenswrapper[31411]: I0224 02:34:43.620745 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-dns-svc\") pod \"dnsmasq-dns-7c45d57b9c-jf69p\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.620983 master-0 kubenswrapper[31411]: I0224 02:34:43.620842 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-config\") pod \"dnsmasq-dns-7c45d57b9c-jf69p\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.620983 master-0 kubenswrapper[31411]: I0224 02:34:43.620887 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxfhw\" (UniqueName: \"kubernetes.io/projected/bef074ce-14e6-4258-8172-bd2e640ae24b-kube-api-access-fxfhw\") pod \"dnsmasq-dns-7c45d57b9c-jf69p\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.625223 master-0 kubenswrapper[31411]: I0224 02:34:43.625190 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-dns-svc\") pod \"dnsmasq-dns-7c45d57b9c-jf69p\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.633105 master-0 kubenswrapper[31411]: I0224 02:34:43.628384 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-config\") pod \"dnsmasq-dns-7c45d57b9c-jf69p\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.651307 master-0 kubenswrapper[31411]: I0224 02:34:43.651260 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxfhw\" (UniqueName: \"kubernetes.io/projected/bef074ce-14e6-4258-8172-bd2e640ae24b-kube-api-access-fxfhw\") pod \"dnsmasq-dns-7c45d57b9c-jf69p\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.654914 master-0 kubenswrapper[31411]: I0224 02:34:43.654858 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-qbhgh"]
Feb 24 02:34:43.795675 master-0 kubenswrapper[31411]: I0224 02:34:43.795531 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p"
Feb 24 02:34:43.854168 master-0 kubenswrapper[31411]: I0224 02:34:43.854097 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" event={"ID":"713f4764-f8a7-4867-bd77-54c68933ca65","Type":"ContainerStarted","Data":"5be2d96d71995489bdd5e62083675ae8722bc95bc08e2345594eac5e45d11a61"}
Feb 24 02:34:44.498454 master-0 kubenswrapper[31411]: I0224 02:34:44.498347 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-jf69p"]
Feb 24 02:34:44.866056 master-0 kubenswrapper[31411]: I0224 02:34:44.865905 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" event={"ID":"bef074ce-14e6-4258-8172-bd2e640ae24b","Type":"ContainerStarted","Data":"41fa347412359539880b9ddb0e1e83dfe92ff1f425c7d7ad7e7f1df699b7f38d"}
Feb 24 02:34:46.353882 master-0 kubenswrapper[31411]: I0224 02:34:46.346478 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 24 02:34:46.353882 master-0 kubenswrapper[31411]: I0224 02:34:46.348846 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.353882 master-0 kubenswrapper[31411]: I0224 02:34:46.352364 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 24 02:34:46.353882 master-0 kubenswrapper[31411]: I0224 02:34:46.352933 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 24 02:34:46.353882 master-0 kubenswrapper[31411]: I0224 02:34:46.353224 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 24 02:34:46.365912 master-0 kubenswrapper[31411]: I0224 02:34:46.365820 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 24 02:34:46.366117 master-0 kubenswrapper[31411]: I0224 02:34:46.366095 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 24 02:34:46.366284 master-0 kubenswrapper[31411]: I0224 02:34:46.366223 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 24 02:34:46.473831 master-0 kubenswrapper[31411]: I0224 02:34:46.473235 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.477607 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.477653 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5680b3af-dae8-4617-80b2-30c0a9818130-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.477748 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.477775 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5680b3af-dae8-4617-80b2-30c0a9818130-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.477821 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5680b3af-dae8-4617-80b2-30c0a9818130-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.477892 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73225840-35e7-4008-8fed-f5170a782266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed6882af-2232-46d2-b108-cb681d8188f4\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.477927 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5680b3af-dae8-4617-80b2-30c0a9818130-config-data\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.477945 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.477986 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5680b3af-dae8-4617-80b2-30c0a9818130-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.478013 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.489647 master-0 kubenswrapper[31411]: I0224 02:34:46.478050 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dn78\" (UniqueName: \"kubernetes.io/projected/5680b3af-dae8-4617-80b2-30c0a9818130-kube-api-access-2dn78\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.583307 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.583372 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5680b3af-dae8-4617-80b2-30c0a9818130-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.583424 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5680b3af-dae8-4617-80b2-30c0a9818130-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.583455 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73225840-35e7-4008-8fed-f5170a782266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed6882af-2232-46d2-b108-cb681d8188f4\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.583484 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5680b3af-dae8-4617-80b2-30c0a9818130-config-data\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.583508 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.583549 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5680b3af-dae8-4617-80b2-30c0a9818130-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.586446 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.586491 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dn78\" (UniqueName: \"kubernetes.io/projected/5680b3af-dae8-4617-80b2-30c0a9818130-kube-api-access-2dn78\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.586546 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.586717 master-0 kubenswrapper[31411]: I0224 02:34:46.586565 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5680b3af-dae8-4617-80b2-30c0a9818130-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.588692 master-0 kubenswrapper[31411]: I0224 02:34:46.588664 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.589997 master-0 kubenswrapper[31411]: I0224 02:34:46.589976 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5680b3af-dae8-4617-80b2-30c0a9818130-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.605492 master-0 kubenswrapper[31411]: I0224 02:34:46.598258 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5680b3af-dae8-4617-80b2-30c0a9818130-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.605748 master-0 kubenswrapper[31411]: I0224 02:34:46.605659 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5680b3af-dae8-4617-80b2-30c0a9818130-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.606647 master-0 kubenswrapper[31411]: I0224 02:34:46.606623 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.606892 master-0 kubenswrapper[31411]: I0224 02:34:46.606865 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5680b3af-dae8-4617-80b2-30c0a9818130-config-data\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.620048 master-0 kubenswrapper[31411]: I0224 02:34:46.611075 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.620048 master-0 kubenswrapper[31411]: I0224 02:34:46.619737 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5680b3af-dae8-4617-80b2-30c0a9818130-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.621028 master-0 kubenswrapper[31411]: I0224 02:34:46.620763 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 02:34:46.621028 master-0 kubenswrapper[31411]: I0224 02:34:46.620821 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73225840-35e7-4008-8fed-f5170a782266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed6882af-2232-46d2-b108-cb681d8188f4\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/2bd39f3d233950462f3fae7814a30adf979ebb0dbba32cf227177d934ea828d5/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.633691 master-0 kubenswrapper[31411]: I0224 02:34:46.633654 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5680b3af-dae8-4617-80b2-30c0a9818130-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:46.723290 master-0 kubenswrapper[31411]: I0224 02:34:46.723192 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dn78\" (UniqueName: \"kubernetes.io/projected/5680b3af-dae8-4617-80b2-30c0a9818130-kube-api-access-2dn78\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0"
Feb 24 02:34:50.393606 master-0 kubenswrapper[31411]: I0224 02:34:50.387786 31411 patch_prober.go:28] interesting pod/metrics-server-67ddc7b799-zlnvf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.128.0.102:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 02:34:50.393606 master-0 kubenswrapper[31411]: I0224 02:34:50.387887 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" podUID="02fc214d-8c40-4ed5-9f18-8bf5863d8d70" containerName="metrics-server" probeResult="failure" output="Get \"https://10.128.0.102:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:34:50.393606 master-0 kubenswrapper[31411]: I0224 02:34:50.389494 31411 patch_prober.go:28] interesting pod/metrics-server-67ddc7b799-zlnvf container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.102:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 02:34:50.393606 master-0 kubenswrapper[31411]: I0224 02:34:50.390300 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-67ddc7b799-zlnvf" podUID="02fc214d-8c40-4ed5-9f18-8bf5863d8d70" containerName="metrics-server" probeResult="failure" output="Get \"https://10.128.0.102:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:34:50.525473 master-0 kubenswrapper[31411]: I0224 02:34:50.507300 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 24 02:34:50.525473 master-0 kubenswrapper[31411]: I0224 02:34:50.508837 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 24 02:34:50.525473 master-0 kubenswrapper[31411]: I0224 02:34:50.508926 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.527511 master-0 kubenswrapper[31411]: I0224 02:34:50.527471 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 24 02:34:50.528167 master-0 kubenswrapper[31411]: I0224 02:34:50.528146 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 24 02:34:50.528920 master-0 kubenswrapper[31411]: I0224 02:34:50.528900 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 24 02:34:50.529272 master-0 kubenswrapper[31411]: I0224 02:34:50.529252 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 24 02:34:50.529531 master-0 kubenswrapper[31411]: I0224 02:34:50.529513 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 24 02:34:50.529818 master-0 kubenswrapper[31411]: I0224 02:34:50.529800 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 24 02:34:50.638092 master-0 kubenswrapper[31411]: I0224 02:34:50.638032 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc47c58d-5bd1-4cb0-942f-6a048792da9a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.640139 master-0 kubenswrapper[31411]: I0224 02:34:50.640115 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc47c58d-5bd1-4cb0-942f-6a048792da9a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.641672 master-0 kubenswrapper[31411]: I0224 02:34:50.641595 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc47c58d-5bd1-4cb0-942f-6a048792da9a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.641802 master-0 kubenswrapper[31411]: I0224 02:34:50.641724 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh9h6\" (UniqueName: \"kubernetes.io/projected/fc47c58d-5bd1-4cb0-942f-6a048792da9a-kube-api-access-kh9h6\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.642443 master-0 kubenswrapper[31411]: I0224 02:34:50.641902 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.643565 master-0 kubenswrapper[31411]: I0224 02:34:50.643511 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-436e1541-7987-4390-8405-eaf459b61a91\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7fa5489b-8891-46d0-afb9-c1b806ac2d60\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.643565 master-0 kubenswrapper[31411]: I0224 02:34:50.643617 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc47c58d-5bd1-4cb0-942f-6a048792da9a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.644060 master-0 kubenswrapper[31411]: I0224 02:34:50.643758 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.644233 master-0 kubenswrapper[31411]: I0224 02:34:50.644102 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.644233 master-0 kubenswrapper[31411]: I0224 02:34:50.644206 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.644352 master-0 kubenswrapper[31411]: I0224 02:34:50.644292 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc47c58d-5bd1-4cb0-942f-6a048792da9a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747190 master-0 kubenswrapper[31411]: I0224 02:34:50.747122 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc47c58d-5bd1-4cb0-942f-6a048792da9a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747190 master-0 kubenswrapper[31411]: I0224 02:34:50.747185 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh9h6\" (UniqueName: \"kubernetes.io/projected/fc47c58d-5bd1-4cb0-942f-6a048792da9a-kube-api-access-kh9h6\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747604 master-0 kubenswrapper[31411]: I0224 02:34:50.747219 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747604 master-0 kubenswrapper[31411]: I0224 02:34:50.747263 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-436e1541-7987-4390-8405-eaf459b61a91\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7fa5489b-8891-46d0-afb9-c1b806ac2d60\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747604 master-0 kubenswrapper[31411]: I0224 02:34:50.747289 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc47c58d-5bd1-4cb0-942f-6a048792da9a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747604 master-0 kubenswrapper[31411]: I0224 02:34:50.747319 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747604 master-0 kubenswrapper[31411]: I0224 02:34:50.747366 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747604 master-0 kubenswrapper[31411]: I0224 02:34:50.747393 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747604 master-0 kubenswrapper[31411]: I0224 02:34:50.747425 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc47c58d-5bd1-4cb0-942f-6a048792da9a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747604 master-0 kubenswrapper[31411]: I0224 02:34:50.747476 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc47c58d-5bd1-4cb0-942f-6a048792da9a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.747604 master-0 kubenswrapper[31411]: I0224 02:34:50.747552 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc47c58d-5bd1-4cb0-942f-6a048792da9a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.748676 master-0 kubenswrapper[31411]: I0224 02:34:50.748648 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc47c58d-5bd1-4cb0-942f-6a048792da9a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.748877 master-0 kubenswrapper[31411]: I0224 02:34:50.748828 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.749504 master-0 kubenswrapper[31411]: I0224 02:34:50.749460 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc47c58d-5bd1-4cb0-942f-6a048792da9a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.750719 master-0 kubenswrapper[31411]: I0224 02:34:50.750678 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.751827 master-0 kubenswrapper[31411]: I0224 02:34:50.751776 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc47c58d-5bd1-4cb0-942f-6a048792da9a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.757178 master-0 kubenswrapper[31411]: I0224 02:34:50.753836 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.757178 master-0 kubenswrapper[31411]: I0224 02:34:50.754471 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 02:34:50.757178 master-0 kubenswrapper[31411]: I0224 02:34:50.754529 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-436e1541-7987-4390-8405-eaf459b61a91\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7fa5489b-8891-46d0-afb9-c1b806ac2d60\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/970cd09ad224d489f05ca82f416e4202572ea95bb4783c2afd5073bfe4db44ef/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.763225 master-0 kubenswrapper[31411]: I0224 02:34:50.763043 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc47c58d-5bd1-4cb0-942f-6a048792da9a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 24 02:34:50.763621 master-0 kubenswrapper[31411]: I0224 02:34:50.763597 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/fc47c58d-5bd1-4cb0-942f-6a048792da9a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 02:34:50.768051 master-0 kubenswrapper[31411]: I0224 02:34:50.768001 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc47c58d-5bd1-4cb0-942f-6a048792da9a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 02:34:50.770233 master-0 kubenswrapper[31411]: I0224 02:34:50.770211 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 24 02:34:50.771461 master-0 kubenswrapper[31411]: I0224 02:34:50.771430 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh9h6\" (UniqueName: \"kubernetes.io/projected/fc47c58d-5bd1-4cb0-942f-6a048792da9a-kube-api-access-kh9h6\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 02:34:50.772344 master-0 kubenswrapper[31411]: I0224 02:34:50.772327 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 24 02:34:50.775708 master-0 kubenswrapper[31411]: I0224 02:34:50.775664 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 24 02:34:50.776091 master-0 kubenswrapper[31411]: I0224 02:34:50.776066 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 24 02:34:50.782212 master-0 kubenswrapper[31411]: I0224 02:34:50.781910 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 24 02:34:50.784487 master-0 kubenswrapper[31411]: I0224 02:34:50.784456 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 24 02:34:50.850771 master-0 kubenswrapper[31411]: I0224 02:34:50.850717 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cca46d62-e09a-45a1-ae65-0465747dc0a7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.850918 master-0 kubenswrapper[31411]: I0224 02:34:50.850775 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cca46d62-e09a-45a1-ae65-0465747dc0a7-config-data\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.850918 master-0 kubenswrapper[31411]: I0224 02:34:50.850849 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h52x\" (UniqueName: \"kubernetes.io/projected/cca46d62-e09a-45a1-ae65-0465747dc0a7-kube-api-access-5h52x\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.851027 master-0 kubenswrapper[31411]: I0224 
02:34:50.850928 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cca46d62-e09a-45a1-ae65-0465747dc0a7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.851027 master-0 kubenswrapper[31411]: I0224 02:34:50.850960 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cca46d62-e09a-45a1-ae65-0465747dc0a7-kolla-config\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.953444 master-0 kubenswrapper[31411]: I0224 02:34:50.953405 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cca46d62-e09a-45a1-ae65-0465747dc0a7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.953824 master-0 kubenswrapper[31411]: I0224 02:34:50.953791 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cca46d62-e09a-45a1-ae65-0465747dc0a7-kolla-config\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.954031 master-0 kubenswrapper[31411]: I0224 02:34:50.954010 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cca46d62-e09a-45a1-ae65-0465747dc0a7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.954172 master-0 kubenswrapper[31411]: I0224 02:34:50.954151 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/cca46d62-e09a-45a1-ae65-0465747dc0a7-config-data\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.954488 master-0 kubenswrapper[31411]: I0224 02:34:50.954463 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h52x\" (UniqueName: \"kubernetes.io/projected/cca46d62-e09a-45a1-ae65-0465747dc0a7-kube-api-access-5h52x\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.954949 master-0 kubenswrapper[31411]: I0224 02:34:50.954894 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cca46d62-e09a-45a1-ae65-0465747dc0a7-kolla-config\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.955825 master-0 kubenswrapper[31411]: I0224 02:34:50.955769 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cca46d62-e09a-45a1-ae65-0465747dc0a7-config-data\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.959605 master-0 kubenswrapper[31411]: I0224 02:34:50.959538 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cca46d62-e09a-45a1-ae65-0465747dc0a7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.960403 master-0 kubenswrapper[31411]: I0224 02:34:50.960351 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cca46d62-e09a-45a1-ae65-0465747dc0a7-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:50.974346 master-0 kubenswrapper[31411]: I0224 02:34:50.974287 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h52x\" (UniqueName: \"kubernetes.io/projected/cca46d62-e09a-45a1-ae65-0465747dc0a7-kube-api-access-5h52x\") pod \"memcached-0\" (UID: \"cca46d62-e09a-45a1-ae65-0465747dc0a7\") " pod="openstack/memcached-0" Feb 24 02:34:51.131773 master-0 kubenswrapper[31411]: I0224 02:34:51.131681 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 24 02:34:51.669555 master-0 kubenswrapper[31411]: I0224 02:34:51.669490 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 24 02:34:51.806831 master-0 kubenswrapper[31411]: I0224 02:34:51.806747 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73225840-35e7-4008-8fed-f5170a782266\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ed6882af-2232-46d2-b108-cb681d8188f4\") pod \"rabbitmq-server-0\" (UID: \"5680b3af-dae8-4617-80b2-30c0a9818130\") " pod="openstack/rabbitmq-server-0" Feb 24 02:34:51.833087 master-0 kubenswrapper[31411]: I0224 02:34:51.833058 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 02:34:52.113949 master-0 kubenswrapper[31411]: I0224 02:34:52.113867 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cca46d62-e09a-45a1-ae65-0465747dc0a7","Type":"ContainerStarted","Data":"706f47258e8d365db46c464ab6abfc8d043db55ebd335c16ef1e971d66eeafe4"} Feb 24 02:34:52.489194 master-0 kubenswrapper[31411]: I0224 02:34:52.489125 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 02:34:52.512117 master-0 kubenswrapper[31411]: W0224 02:34:52.511758 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5680b3af_dae8_4617_80b2_30c0a9818130.slice/crio-86a60e91187690159a34d1e34f6edd926d67e65bd67c3a4a718e8f1d004e86ab WatchSource:0}: Error finding container 86a60e91187690159a34d1e34f6edd926d67e65bd67c3a4a718e8f1d004e86ab: Status 404 returned error can't find the container with id 86a60e91187690159a34d1e34f6edd926d67e65bd67c3a4a718e8f1d004e86ab Feb 24 02:34:53.128560 master-0 kubenswrapper[31411]: I0224 02:34:53.128489 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5680b3af-dae8-4617-80b2-30c0a9818130","Type":"ContainerStarted","Data":"86a60e91187690159a34d1e34f6edd926d67e65bd67c3a4a718e8f1d004e86ab"} Feb 24 02:34:53.221309 master-0 kubenswrapper[31411]: I0224 02:34:53.221249 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-436e1541-7987-4390-8405-eaf459b61a91\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7fa5489b-8891-46d0-afb9-c1b806ac2d60\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc47c58d-5bd1-4cb0-942f-6a048792da9a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 02:34:53.288677 master-0 kubenswrapper[31411]: I0224 02:34:53.288626 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 02:34:54.372755 master-0 kubenswrapper[31411]: I0224 02:34:54.372367 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 02:34:55.355401 master-0 kubenswrapper[31411]: I0224 02:34:55.355339 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 24 02:34:55.357453 master-0 kubenswrapper[31411]: I0224 02:34:55.357419 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 24 02:34:55.360445 master-0 kubenswrapper[31411]: I0224 02:34:55.360333 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 24 02:34:55.360843 master-0 kubenswrapper[31411]: I0224 02:34:55.360806 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 24 02:34:55.361448 master-0 kubenswrapper[31411]: I0224 02:34:55.361420 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 24 02:34:55.368735 master-0 kubenswrapper[31411]: I0224 02:34:55.368694 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 24 02:34:55.450838 master-0 kubenswrapper[31411]: I0224 02:34:55.450766 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" podUID="e2268b3c-ccf6-4309-ab1e-6c083c1f78cf" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.163:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 02:34:55.641235 master-0 kubenswrapper[31411]: I0224 02:34:55.641073 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/93374608-d6a1-4e71-8682-3a86e5815f29-operator-scripts\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.641495 master-0 kubenswrapper[31411]: I0224 02:34:55.641218 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93374608-d6a1-4e71-8682-3a86e5815f29-kolla-config\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.641495 master-0 kubenswrapper[31411]: I0224 02:34:55.641364 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l87b\" (UniqueName: \"kubernetes.io/projected/93374608-d6a1-4e71-8682-3a86e5815f29-kube-api-access-6l87b\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.641495 master-0 kubenswrapper[31411]: I0224 02:34:55.641398 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e505a6ea-7c17-4298-9b54-895fbaced559\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15b7402c-8426-44c3-a6df-b37fecd956ef\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.641495 master-0 kubenswrapper[31411]: I0224 02:34:55.641430 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93374608-d6a1-4e71-8682-3a86e5815f29-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.641495 master-0 kubenswrapper[31411]: I0224 02:34:55.641490 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/93374608-d6a1-4e71-8682-3a86e5815f29-config-data-default\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.641679 master-0 kubenswrapper[31411]: I0224 02:34:55.641508 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/93374608-d6a1-4e71-8682-3a86e5815f29-config-data-generated\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.641679 master-0 kubenswrapper[31411]: I0224 02:34:55.641557 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/93374608-d6a1-4e71-8682-3a86e5815f29-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.743194 master-0 kubenswrapper[31411]: I0224 02:34:55.743146 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93374608-d6a1-4e71-8682-3a86e5815f29-operator-scripts\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.743194 master-0 kubenswrapper[31411]: I0224 02:34:55.743196 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93374608-d6a1-4e71-8682-3a86e5815f29-kolla-config\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.743309 master-0 kubenswrapper[31411]: I0224 02:34:55.743250 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6l87b\" (UniqueName: \"kubernetes.io/projected/93374608-d6a1-4e71-8682-3a86e5815f29-kube-api-access-6l87b\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.743309 master-0 kubenswrapper[31411]: I0224 02:34:55.743278 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e505a6ea-7c17-4298-9b54-895fbaced559\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15b7402c-8426-44c3-a6df-b37fecd956ef\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.743309 master-0 kubenswrapper[31411]: I0224 02:34:55.743305 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93374608-d6a1-4e71-8682-3a86e5815f29-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.743433 master-0 kubenswrapper[31411]: I0224 02:34:55.743359 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/93374608-d6a1-4e71-8682-3a86e5815f29-config-data-default\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.743433 master-0 kubenswrapper[31411]: I0224 02:34:55.743384 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/93374608-d6a1-4e71-8682-3a86e5815f29-config-data-generated\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.743507 master-0 kubenswrapper[31411]: I0224 02:34:55.743440 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/93374608-d6a1-4e71-8682-3a86e5815f29-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.745200 master-0 kubenswrapper[31411]: I0224 02:34:55.745168 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/93374608-d6a1-4e71-8682-3a86e5815f29-kolla-config\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.745323 master-0 kubenswrapper[31411]: I0224 02:34:55.745286 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93374608-d6a1-4e71-8682-3a86e5815f29-operator-scripts\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.745988 master-0 kubenswrapper[31411]: I0224 02:34:55.745960 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/93374608-d6a1-4e71-8682-3a86e5815f29-config-data-default\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.746379 master-0 kubenswrapper[31411]: I0224 02:34:55.746361 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/93374608-d6a1-4e71-8682-3a86e5815f29-config-data-generated\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.747192 master-0 kubenswrapper[31411]: I0224 02:34:55.747156 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/93374608-d6a1-4e71-8682-3a86e5815f29-galera-tls-certs\") pod 
\"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.747293 master-0 kubenswrapper[31411]: I0224 02:34:55.747267 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 02:34:55.747368 master-0 kubenswrapper[31411]: I0224 02:34:55.747331 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e505a6ea-7c17-4298-9b54-895fbaced559\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15b7402c-8426-44c3-a6df-b37fecd956ef\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/ea8e15fb13cac4ac9ffa0c2af0c893821e1d7d4f7a78f2f6f8db3d0e73025c9b/globalmount\"" pod="openstack/openstack-galera-0" Feb 24 02:34:55.749678 master-0 kubenswrapper[31411]: I0224 02:34:55.749644 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93374608-d6a1-4e71-8682-3a86e5815f29-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:55.766126 master-0 kubenswrapper[31411]: I0224 02:34:55.766106 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l87b\" (UniqueName: \"kubernetes.io/projected/93374608-d6a1-4e71-8682-3a86e5815f29-kube-api-access-6l87b\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:57.001437 master-0 kubenswrapper[31411]: I0224 02:34:57.001343 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 02:34:57.008293 master-0 kubenswrapper[31411]: I0224 02:34:57.008233 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.013087 master-0 kubenswrapper[31411]: I0224 02:34:57.013034 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 24 02:34:57.016446 master-0 kubenswrapper[31411]: I0224 02:34:57.016380 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 24 02:34:57.031409 master-0 kubenswrapper[31411]: I0224 02:34:57.031348 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 24 02:34:57.036666 master-0 kubenswrapper[31411]: I0224 02:34:57.036637 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 02:34:57.082350 master-0 kubenswrapper[31411]: I0224 02:34:57.082283 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/86b07869-3ccf-46a9-9ca3-9954a1508cff-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.082927 master-0 kubenswrapper[31411]: I0224 02:34:57.082907 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b07869-3ccf-46a9-9ca3-9954a1508cff-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.083090 master-0 kubenswrapper[31411]: I0224 02:34:57.083063 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/86b07869-3ccf-46a9-9ca3-9954a1508cff-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.083219 master-0 kubenswrapper[31411]: I0224 02:34:57.083202 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4c6k\" (UniqueName: \"kubernetes.io/projected/86b07869-3ccf-46a9-9ca3-9954a1508cff-kube-api-access-z4c6k\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.083384 master-0 kubenswrapper[31411]: I0224 02:34:57.083365 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-55ed44a5-d0eb-48d0-a59a-001d0b7a79dc\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eef5410c-8b72-4787-ba7e-e44737b21fc3\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.083525 master-0 kubenswrapper[31411]: I0224 02:34:57.083506 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b07869-3ccf-46a9-9ca3-9954a1508cff-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.083747 master-0 kubenswrapper[31411]: I0224 02:34:57.083725 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b07869-3ccf-46a9-9ca3-9954a1508cff-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.083927 master-0 kubenswrapper[31411]: I0224 02:34:57.083906 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/86b07869-3ccf-46a9-9ca3-9954a1508cff-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.187830 master-0 kubenswrapper[31411]: I0224 02:34:57.186484 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b07869-3ccf-46a9-9ca3-9954a1508cff-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.187830 master-0 kubenswrapper[31411]: I0224 02:34:57.186723 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/86b07869-3ccf-46a9-9ca3-9954a1508cff-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.187830 master-0 kubenswrapper[31411]: I0224 02:34:57.187076 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/86b07869-3ccf-46a9-9ca3-9954a1508cff-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.187830 master-0 kubenswrapper[31411]: I0224 02:34:57.187347 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b07869-3ccf-46a9-9ca3-9954a1508cff-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.187830 master-0 kubenswrapper[31411]: I0224 02:34:57.187511 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-z4c6k\" (UniqueName: \"kubernetes.io/projected/86b07869-3ccf-46a9-9ca3-9954a1508cff-kube-api-access-z4c6k\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.187830 master-0 kubenswrapper[31411]: I0224 02:34:57.187606 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/86b07869-3ccf-46a9-9ca3-9954a1508cff-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.187830 master-0 kubenswrapper[31411]: I0224 02:34:57.187649 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/86b07869-3ccf-46a9-9ca3-9954a1508cff-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.188292 master-0 kubenswrapper[31411]: I0224 02:34:57.187886 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-55ed44a5-d0eb-48d0-a59a-001d0b7a79dc\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eef5410c-8b72-4787-ba7e-e44737b21fc3\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.188292 master-0 kubenswrapper[31411]: I0224 02:34:57.188005 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b07869-3ccf-46a9-9ca3-9954a1508cff-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.188292 master-0 kubenswrapper[31411]: I0224 02:34:57.188218 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/86b07869-3ccf-46a9-9ca3-9954a1508cff-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.189734 master-0 kubenswrapper[31411]: I0224 02:34:57.189680 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/86b07869-3ccf-46a9-9ca3-9954a1508cff-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.189827 master-0 kubenswrapper[31411]: I0224 02:34:57.189771 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 02:34:57.189827 master-0 kubenswrapper[31411]: I0224 02:34:57.189793 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-55ed44a5-d0eb-48d0-a59a-001d0b7a79dc\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eef5410c-8b72-4787-ba7e-e44737b21fc3\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f0420d5ae89279c253276f114b04f689587a3df5670402955128d1b5d098152b/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.190185 master-0 kubenswrapper[31411]: I0224 02:34:57.190143 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 24 02:34:57.193133 master-0 kubenswrapper[31411]: I0224 02:34:57.193094 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b07869-3ccf-46a9-9ca3-9954a1508cff-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 
02:34:57.202473 master-0 kubenswrapper[31411]: I0224 02:34:57.202436 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b07869-3ccf-46a9-9ca3-9954a1508cff-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.204407 master-0 kubenswrapper[31411]: I0224 02:34:57.204371 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b07869-3ccf-46a9-9ca3-9954a1508cff-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.210748 master-0 kubenswrapper[31411]: W0224 02:34:57.210656 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc47c58d_5bd1_4cb0_942f_6a048792da9a.slice/crio-bab0a401844fad4ba7618b22ea11962b3d61e5c376a783790b2dfa1de3f29ff9 WatchSource:0}: Error finding container bab0a401844fad4ba7618b22ea11962b3d61e5c376a783790b2dfa1de3f29ff9: Status 404 returned error can't find the container with id bab0a401844fad4ba7618b22ea11962b3d61e5c376a783790b2dfa1de3f29ff9 Feb 24 02:34:57.230198 master-0 kubenswrapper[31411]: I0224 02:34:57.230158 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4c6k\" (UniqueName: \"kubernetes.io/projected/86b07869-3ccf-46a9-9ca3-9954a1508cff-kube-api-access-z4c6k\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:57.334520 master-0 kubenswrapper[31411]: I0224 02:34:57.334365 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"fc47c58d-5bd1-4cb0-942f-6a048792da9a","Type":"ContainerStarted","Data":"bab0a401844fad4ba7618b22ea11962b3d61e5c376a783790b2dfa1de3f29ff9"} Feb 24 02:34:57.518165 master-0 kubenswrapper[31411]: I0224 02:34:57.518082 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e505a6ea-7c17-4298-9b54-895fbaced559\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15b7402c-8426-44c3-a6df-b37fecd956ef\") pod \"openstack-galera-0\" (UID: \"93374608-d6a1-4e71-8682-3a86e5815f29\") " pod="openstack/openstack-galera-0" Feb 24 02:34:57.632263 master-0 kubenswrapper[31411]: I0224 02:34:57.632195 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 24 02:34:58.987555 master-0 kubenswrapper[31411]: I0224 02:34:58.987420 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-55ed44a5-d0eb-48d0-a59a-001d0b7a79dc\" (UniqueName: \"kubernetes.io/csi/topolvm.io^eef5410c-8b72-4787-ba7e-e44737b21fc3\") pod \"openstack-cell1-galera-0\" (UID: \"86b07869-3ccf-46a9-9ca3-9954a1508cff\") " pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:59.065450 master-0 kubenswrapper[31411]: I0224 02:34:59.065390 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hjmv9"] Feb 24 02:34:59.098335 master-0 kubenswrapper[31411]: I0224 02:34:59.098270 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hjmv9"] Feb 24 02:34:59.099979 master-0 kubenswrapper[31411]: I0224 02:34:59.099883 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.102171 master-0 kubenswrapper[31411]: I0224 02:34:59.102140 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 24 02:34:59.102736 master-0 kubenswrapper[31411]: I0224 02:34:59.102721 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 24 02:34:59.130106 master-0 kubenswrapper[31411]: I0224 02:34:59.130007 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 24 02:34:59.135091 master-0 kubenswrapper[31411]: I0224 02:34:59.134434 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lp2wm"] Feb 24 02:34:59.137124 master-0 kubenswrapper[31411]: I0224 02:34:59.137089 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.151368 master-0 kubenswrapper[31411]: I0224 02:34:59.150131 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lp2wm"] Feb 24 02:34:59.159413 master-0 kubenswrapper[31411]: I0224 02:34:59.159084 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/76252167-d1e5-4ee1-b26f-853eb9e161a7-var-log-ovn\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.159413 master-0 kubenswrapper[31411]: I0224 02:34:59.159316 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/76252167-d1e5-4ee1-b26f-853eb9e161a7-ovn-controller-tls-certs\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 
02:34:59.159663 master-0 kubenswrapper[31411]: I0224 02:34:59.159448 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76252167-d1e5-4ee1-b26f-853eb9e161a7-combined-ca-bundle\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.159663 master-0 kubenswrapper[31411]: I0224 02:34:59.159497 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/76252167-d1e5-4ee1-b26f-853eb9e161a7-var-run\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.159663 master-0 kubenswrapper[31411]: I0224 02:34:59.159620 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjd75\" (UniqueName: \"kubernetes.io/projected/76252167-d1e5-4ee1-b26f-853eb9e161a7-kube-api-access-vjd75\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.161541 master-0 kubenswrapper[31411]: I0224 02:34:59.159878 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/76252167-d1e5-4ee1-b26f-853eb9e161a7-var-run-ovn\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.161541 master-0 kubenswrapper[31411]: I0224 02:34:59.160031 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76252167-d1e5-4ee1-b26f-853eb9e161a7-scripts\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " 
pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.263098 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-etc-ovs\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.263326 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-var-lib\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.263364 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76252167-d1e5-4ee1-b26f-853eb9e161a7-combined-ca-bundle\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.263527 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/76252167-d1e5-4ee1-b26f-853eb9e161a7-var-run\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.263606 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-var-log\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " 
pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.263680 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjd75\" (UniqueName: \"kubernetes.io/projected/76252167-d1e5-4ee1-b26f-853eb9e161a7-kube-api-access-vjd75\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.263734 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/76252167-d1e5-4ee1-b26f-853eb9e161a7-var-run-ovn\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.263783 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-var-run\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.263919 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76252167-d1e5-4ee1-b26f-853eb9e161a7-scripts\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.264102 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/76252167-d1e5-4ee1-b26f-853eb9e161a7-var-log-ovn\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 
master-0 kubenswrapper[31411]: I0224 02:34:59.264165 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8t2r\" (UniqueName: \"kubernetes.io/projected/67e9af05-4de5-4257-b103-4af520af6fec-kube-api-access-b8t2r\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.264229 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/76252167-d1e5-4ee1-b26f-853eb9e161a7-var-run\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.264406 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67e9af05-4de5-4257-b103-4af520af6fec-scripts\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.264507 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/76252167-d1e5-4ee1-b26f-853eb9e161a7-ovn-controller-tls-certs\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 master-0 kubenswrapper[31411]: I0224 02:34:59.264395 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/76252167-d1e5-4ee1-b26f-853eb9e161a7-var-run-ovn\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.265890 master-0 
kubenswrapper[31411]: I0224 02:34:59.265042 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/76252167-d1e5-4ee1-b26f-853eb9e161a7-var-log-ovn\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.267996 master-0 kubenswrapper[31411]: I0224 02:34:59.267956 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76252167-d1e5-4ee1-b26f-853eb9e161a7-combined-ca-bundle\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.268869 master-0 kubenswrapper[31411]: I0224 02:34:59.268839 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/76252167-d1e5-4ee1-b26f-853eb9e161a7-ovn-controller-tls-certs\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.270956 master-0 kubenswrapper[31411]: I0224 02:34:59.270885 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76252167-d1e5-4ee1-b26f-853eb9e161a7-scripts\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.279974 master-0 kubenswrapper[31411]: I0224 02:34:59.279923 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjd75\" (UniqueName: \"kubernetes.io/projected/76252167-d1e5-4ee1-b26f-853eb9e161a7-kube-api-access-vjd75\") pod \"ovn-controller-hjmv9\" (UID: \"76252167-d1e5-4ee1-b26f-853eb9e161a7\") " pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.366403 master-0 kubenswrapper[31411]: I0224 02:34:59.366307 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b8t2r\" (UniqueName: \"kubernetes.io/projected/67e9af05-4de5-4257-b103-4af520af6fec-kube-api-access-b8t2r\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.368659 master-0 kubenswrapper[31411]: I0224 02:34:59.366818 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67e9af05-4de5-4257-b103-4af520af6fec-scripts\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.368951 master-0 kubenswrapper[31411]: I0224 02:34:59.368668 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-etc-ovs\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.369242 master-0 kubenswrapper[31411]: I0224 02:34:59.369222 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-etc-ovs\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.369723 master-0 kubenswrapper[31411]: I0224 02:34:59.369646 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-var-lib\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.369850 master-0 kubenswrapper[31411]: I0224 02:34:59.369778 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-var-log\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.369970 master-0 kubenswrapper[31411]: I0224 02:34:59.369928 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-var-run\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.370075 master-0 kubenswrapper[31411]: I0224 02:34:59.370035 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-var-run\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.370075 master-0 kubenswrapper[31411]: I0224 02:34:59.370042 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-var-lib\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.370220 master-0 kubenswrapper[31411]: I0224 02:34:59.370161 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/67e9af05-4de5-4257-b103-4af520af6fec-var-log\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.372937 master-0 kubenswrapper[31411]: I0224 02:34:59.372820 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67e9af05-4de5-4257-b103-4af520af6fec-scripts\") pod \"ovn-controller-ovs-lp2wm\" (UID: 
\"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.391921 master-0 kubenswrapper[31411]: I0224 02:34:59.391857 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8t2r\" (UniqueName: \"kubernetes.io/projected/67e9af05-4de5-4257-b103-4af520af6fec-kube-api-access-b8t2r\") pod \"ovn-controller-ovs-lp2wm\" (UID: \"67e9af05-4de5-4257-b103-4af520af6fec\") " pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:34:59.424750 master-0 kubenswrapper[31411]: I0224 02:34:59.424512 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hjmv9" Feb 24 02:34:59.456472 master-0 kubenswrapper[31411]: I0224 02:34:59.456398 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:35:00.990742 master-0 kubenswrapper[31411]: I0224 02:35:00.990639 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 02:35:00.994072 master-0 kubenswrapper[31411]: I0224 02:35:00.993788 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.001278 master-0 kubenswrapper[31411]: I0224 02:35:01.001212 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 24 02:35:01.001459 master-0 kubenswrapper[31411]: I0224 02:35:01.001339 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 24 02:35:01.001459 master-0 kubenswrapper[31411]: I0224 02:35:01.001440 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 24 02:35:01.004126 master-0 kubenswrapper[31411]: I0224 02:35:01.003257 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 24 02:35:01.042485 master-0 kubenswrapper[31411]: I0224 02:35:01.042304 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 02:35:01.124931 master-0 kubenswrapper[31411]: I0224 02:35:01.124757 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.124931 master-0 kubenswrapper[31411]: I0224 02:35:01.124866 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.125266 master-0 kubenswrapper[31411]: I0224 02:35:01.124960 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.132277 master-0 kubenswrapper[31411]: I0224 02:35:01.127116 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.132277 master-0 kubenswrapper[31411]: I0224 02:35:01.127246 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6d9616cd-d188-43df-aab6-e0353beab110\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bd3c911f-390b-4b9f-9fdf-4fc13da87c26\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.132277 master-0 kubenswrapper[31411]: I0224 02:35:01.127284 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmxk\" (UniqueName: \"kubernetes.io/projected/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-kube-api-access-7lmxk\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.132277 master-0 kubenswrapper[31411]: I0224 02:35:01.127371 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.132277 master-0 kubenswrapper[31411]: I0224 02:35:01.131286 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-config\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.234246 master-0 kubenswrapper[31411]: I0224 02:35:01.234068 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.234679 master-0 kubenswrapper[31411]: I0224 02:35:01.234291 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.234679 master-0 kubenswrapper[31411]: I0224 02:35:01.234566 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.235038 master-0 kubenswrapper[31411]: I0224 02:35:01.234963 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.235564 master-0 kubenswrapper[31411]: I0224 02:35:01.235460 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.236074 master-0 kubenswrapper[31411]: I0224 02:35:01.236029 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6d9616cd-d188-43df-aab6-e0353beab110\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bd3c911f-390b-4b9f-9fdf-4fc13da87c26\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.236230 master-0 kubenswrapper[31411]: I0224 02:35:01.236188 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmxk\" (UniqueName: \"kubernetes.io/projected/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-kube-api-access-7lmxk\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.236345 master-0 kubenswrapper[31411]: I0224 02:35:01.236225 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.236683 master-0 kubenswrapper[31411]: I0224 02:35:01.236648 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-config\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.236922 master-0 kubenswrapper[31411]: I0224 02:35:01.236642 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.239857 master-0 kubenswrapper[31411]: I0224 02:35:01.238757 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 02:35:01.239857 master-0 kubenswrapper[31411]: I0224 02:35:01.238816 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6d9616cd-d188-43df-aab6-e0353beab110\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bd3c911f-390b-4b9f-9fdf-4fc13da87c26\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a2c854ba23fc659a1e4d3cc6c67cd7bf309453a7dbc0dacf25df12e742c2a7c9/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.239857 master-0 kubenswrapper[31411]: I0224 02:35:01.239047 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-config\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.243262 master-0 kubenswrapper[31411]: I0224 02:35:01.243134 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.243670 master-0 kubenswrapper[31411]: I0224 02:35:01.243614 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.244328 
master-0 kubenswrapper[31411]: I0224 02:35:01.243760 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:01.272848 master-0 kubenswrapper[31411]: I0224 02:35:01.267963 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmxk\" (UniqueName: \"kubernetes.io/projected/d2e5cfb6-e3cd-428c-9efe-8d23b1f289df-kube-api-access-7lmxk\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:02.708416 master-0 kubenswrapper[31411]: I0224 02:35:02.708355 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6d9616cd-d188-43df-aab6-e0353beab110\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bd3c911f-390b-4b9f-9fdf-4fc13da87c26\") pod \"ovsdbserver-nb-0\" (UID: \"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df\") " pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:02.842490 master-0 kubenswrapper[31411]: I0224 02:35:02.842362 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:04.094843 master-0 kubenswrapper[31411]: I0224 02:35:04.094748 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 02:35:04.096730 master-0 kubenswrapper[31411]: I0224 02:35:04.096701 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.105157 master-0 kubenswrapper[31411]: I0224 02:35:04.104460 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 24 02:35:04.105157 master-0 kubenswrapper[31411]: I0224 02:35:04.104595 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 24 02:35:04.108387 master-0 kubenswrapper[31411]: I0224 02:35:04.107917 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 24 02:35:04.118439 master-0 kubenswrapper[31411]: I0224 02:35:04.118356 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 02:35:04.239667 master-0 kubenswrapper[31411]: I0224 02:35:04.236284 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa44386-3634-42fe-b2fc-6cdd257a8b1e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.239667 master-0 kubenswrapper[31411]: I0224 02:35:04.236408 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/faa44386-3634-42fe-b2fc-6cdd257a8b1e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.239667 master-0 kubenswrapper[31411]: I0224 02:35:04.236722 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b312a42-b48e-4c1f-857b-7a51612d6280\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15fd9f9c-ca02-4684-8d26-b411b8b86b43\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 
02:35:04.239667 master-0 kubenswrapper[31411]: I0224 02:35:04.236860 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa44386-3634-42fe-b2fc-6cdd257a8b1e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.239667 master-0 kubenswrapper[31411]: I0224 02:35:04.237292 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsfzn\" (UniqueName: \"kubernetes.io/projected/faa44386-3634-42fe-b2fc-6cdd257a8b1e-kube-api-access-fsfzn\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.239667 master-0 kubenswrapper[31411]: I0224 02:35:04.237385 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa44386-3634-42fe-b2fc-6cdd257a8b1e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.239667 master-0 kubenswrapper[31411]: I0224 02:35:04.237472 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa44386-3634-42fe-b2fc-6cdd257a8b1e-config\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.239667 master-0 kubenswrapper[31411]: I0224 02:35:04.237548 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/faa44386-3634-42fe-b2fc-6cdd257a8b1e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " 
pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.340628 master-0 kubenswrapper[31411]: I0224 02:35:04.340521 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsfzn\" (UniqueName: \"kubernetes.io/projected/faa44386-3634-42fe-b2fc-6cdd257a8b1e-kube-api-access-fsfzn\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.340927 master-0 kubenswrapper[31411]: I0224 02:35:04.340659 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa44386-3634-42fe-b2fc-6cdd257a8b1e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.340927 master-0 kubenswrapper[31411]: I0224 02:35:04.340706 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa44386-3634-42fe-b2fc-6cdd257a8b1e-config\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.340927 master-0 kubenswrapper[31411]: I0224 02:35:04.340747 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/faa44386-3634-42fe-b2fc-6cdd257a8b1e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.340927 master-0 kubenswrapper[31411]: I0224 02:35:04.340833 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa44386-3634-42fe-b2fc-6cdd257a8b1e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.340927 master-0 kubenswrapper[31411]: 
I0224 02:35:04.340908 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/faa44386-3634-42fe-b2fc-6cdd257a8b1e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.341161 master-0 kubenswrapper[31411]: I0224 02:35:04.340959 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b312a42-b48e-4c1f-857b-7a51612d6280\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15fd9f9c-ca02-4684-8d26-b411b8b86b43\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.342024 master-0 kubenswrapper[31411]: I0224 02:35:04.341975 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/faa44386-3634-42fe-b2fc-6cdd257a8b1e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.342173 master-0 kubenswrapper[31411]: I0224 02:35:04.342139 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa44386-3634-42fe-b2fc-6cdd257a8b1e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.343022 master-0 kubenswrapper[31411]: I0224 02:35:04.342988 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa44386-3634-42fe-b2fc-6cdd257a8b1e-config\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.343862 master-0 kubenswrapper[31411]: I0224 02:35:04.343799 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/faa44386-3634-42fe-b2fc-6cdd257a8b1e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.347225 master-0 kubenswrapper[31411]: I0224 02:35:04.346039 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa44386-3634-42fe-b2fc-6cdd257a8b1e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.353641 master-0 kubenswrapper[31411]: I0224 02:35:04.349427 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 02:35:04.353641 master-0 kubenswrapper[31411]: I0224 02:35:04.349450 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b312a42-b48e-4c1f-857b-7a51612d6280\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15fd9f9c-ca02-4684-8d26-b411b8b86b43\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/a4a4a318a9a142fc8855ccbe3587e85e729dfe8c84a302538c61c3d7648ca653/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.353641 master-0 kubenswrapper[31411]: I0224 02:35:04.349730 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa44386-3634-42fe-b2fc-6cdd257a8b1e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.355103 master-0 kubenswrapper[31411]: I0224 02:35:04.355061 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa44386-3634-42fe-b2fc-6cdd257a8b1e-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:04.369665 master-0 kubenswrapper[31411]: I0224 02:35:04.369624 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsfzn\" (UniqueName: \"kubernetes.io/projected/faa44386-3634-42fe-b2fc-6cdd257a8b1e-kube-api-access-fsfzn\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:06.793243 master-0 kubenswrapper[31411]: I0224 02:35:06.793155 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b312a42-b48e-4c1f-857b-7a51612d6280\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15fd9f9c-ca02-4684-8d26-b411b8b86b43\") pod \"ovsdbserver-sb-0\" (UID: \"faa44386-3634-42fe-b2fc-6cdd257a8b1e\") " pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:06.900379 master-0 kubenswrapper[31411]: I0224 02:35:06.900293 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:13.487950 master-0 kubenswrapper[31411]: I0224 02:35:13.487826 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hjmv9"] Feb 24 02:35:13.499773 master-0 kubenswrapper[31411]: I0224 02:35:13.499706 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 24 02:35:13.501634 master-0 kubenswrapper[31411]: I0224 02:35:13.501512 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 02:35:13.536396 master-0 kubenswrapper[31411]: W0224 02:35:13.536320 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76252167_d1e5_4ee1_b26f_853eb9e161a7.slice/crio-e61443fc1f48db4f214647be752660c350636663ce8ad5dfbad4f36cb39056e9 WatchSource:0}: Error finding container e61443fc1f48db4f214647be752660c350636663ce8ad5dfbad4f36cb39056e9: Status 404 returned error can't find the container with id e61443fc1f48db4f214647be752660c350636663ce8ad5dfbad4f36cb39056e9 Feb 24 02:35:13.539589 master-0 kubenswrapper[31411]: W0224 02:35:13.539187 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86b07869_3ccf_46a9_9ca3_9954a1508cff.slice/crio-c563e7e3f5348a0c776b065d7424c7d91577c54914832819d1e08f036a518e16 WatchSource:0}: Error finding container c563e7e3f5348a0c776b065d7424c7d91577c54914832819d1e08f036a518e16: Status 404 returned error can't find the container with id c563e7e3f5348a0c776b065d7424c7d91577c54914832819d1e08f036a518e16 Feb 24 02:35:13.566474 master-0 kubenswrapper[31411]: I0224 02:35:13.566412 31411 generic.go:334] "Generic (PLEG): container finished" podID="8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a" containerID="bd20997455325414f8ff138c922aab11b6bb808314fbc9b522349bf27528f780" exitCode=0 Feb 24 02:35:13.566608 master-0 
kubenswrapper[31411]: I0224 02:35:13.566488 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc7f9869-4kmll" event={"ID":"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a","Type":"ContainerDied","Data":"bd20997455325414f8ff138c922aab11b6bb808314fbc9b522349bf27528f780"} Feb 24 02:35:13.570634 master-0 kubenswrapper[31411]: I0224 02:35:13.570606 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cca46d62-e09a-45a1-ae65-0465747dc0a7","Type":"ContainerStarted","Data":"67548f21dc47d2fb92325f3ab190de37d9e15681074a028b7dcac8bdbdce9ca9"} Feb 24 02:35:13.571439 master-0 kubenswrapper[31411]: I0224 02:35:13.571412 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 24 02:35:13.573187 master-0 kubenswrapper[31411]: I0224 02:35:13.573162 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"86b07869-3ccf-46a9-9ca3-9954a1508cff","Type":"ContainerStarted","Data":"c563e7e3f5348a0c776b065d7424c7d91577c54914832819d1e08f036a518e16"} Feb 24 02:35:13.574411 master-0 kubenswrapper[31411]: I0224 02:35:13.574384 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hjmv9" event={"ID":"76252167-d1e5-4ee1-b26f-853eb9e161a7","Type":"ContainerStarted","Data":"e61443fc1f48db4f214647be752660c350636663ce8ad5dfbad4f36cb39056e9"} Feb 24 02:35:13.752659 master-0 kubenswrapper[31411]: W0224 02:35:13.752518 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93374608_d6a1_4e71_8682_3a86e5815f29.slice/crio-6a53a8f04e11a9f3803fa5d4a9d073e14333a189baa272e330573f9f5fe3caf9 WatchSource:0}: Error finding container 6a53a8f04e11a9f3803fa5d4a9d073e14333a189baa272e330573f9f5fe3caf9: Status 404 returned error can't find the container with id 6a53a8f04e11a9f3803fa5d4a9d073e14333a189baa272e330573f9f5fe3caf9 Feb 24 
02:35:13.915259 master-0 kubenswrapper[31411]: I0224 02:35:13.915163 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.855623351 podStartE2EDuration="23.915139445s" podCreationTimestamp="2026-02-24 02:34:50 +0000 UTC" firstStartedPulling="2026-02-24 02:34:51.675116869 +0000 UTC m=+834.892314715" lastFinishedPulling="2026-02-24 02:35:12.734632973 +0000 UTC m=+855.951830809" observedRunningTime="2026-02-24 02:35:13.63315773 +0000 UTC m=+856.850355586" watchObservedRunningTime="2026-02-24 02:35:13.915139445 +0000 UTC m=+857.132337301" Feb 24 02:35:13.924014 master-0 kubenswrapper[31411]: I0224 02:35:13.923987 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 02:35:14.121678 master-0 kubenswrapper[31411]: I0224 02:35:14.121635 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lp2wm"] Feb 24 02:35:14.200744 master-0 kubenswrapper[31411]: I0224 02:35:14.200562 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:35:14.373437 master-0 kubenswrapper[31411]: I0224 02:35:14.373232 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-config\") pod \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\" (UID: \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\") " Feb 24 02:35:14.373437 master-0 kubenswrapper[31411]: I0224 02:35:14.373359 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkvwn\" (UniqueName: \"kubernetes.io/projected/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-kube-api-access-jkvwn\") pod \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\" (UID: \"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a\") " Feb 24 02:35:14.380111 master-0 kubenswrapper[31411]: I0224 02:35:14.380005 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-kube-api-access-jkvwn" (OuterVolumeSpecName: "kube-api-access-jkvwn") pod "8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a" (UID: "8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a"). InnerVolumeSpecName "kube-api-access-jkvwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:35:14.480434 master-0 kubenswrapper[31411]: I0224 02:35:14.480130 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkvwn\" (UniqueName: \"kubernetes.io/projected/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-kube-api-access-jkvwn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:14.536590 master-0 kubenswrapper[31411]: I0224 02:35:14.536486 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 02:35:14.597733 master-0 kubenswrapper[31411]: I0224 02:35:14.597640 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"93374608-d6a1-4e71-8682-3a86e5815f29","Type":"ContainerStarted","Data":"6a53a8f04e11a9f3803fa5d4a9d073e14333a189baa272e330573f9f5fe3caf9"} Feb 24 02:35:14.605422 master-0 kubenswrapper[31411]: I0224 02:35:14.605168 31411 generic.go:334] "Generic (PLEG): container finished" podID="9d42f804-aae5-4adf-84cf-6887d203342f" containerID="427c559c357c1b72be70fe101b48b199223f1b4f2190b06ff2344652728392b0" exitCode=0 Feb 24 02:35:14.605422 master-0 kubenswrapper[31411]: I0224 02:35:14.605253 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4c486879-cr468" event={"ID":"9d42f804-aae5-4adf-84cf-6887d203342f","Type":"ContainerDied","Data":"427c559c357c1b72be70fe101b48b199223f1b4f2190b06ff2344652728392b0"} Feb 24 02:35:14.607832 master-0 kubenswrapper[31411]: I0224 02:35:14.607787 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lp2wm" event={"ID":"67e9af05-4de5-4257-b103-4af520af6fec","Type":"ContainerStarted","Data":"282dfe2a930ba96eb7dc243e8ee4d3a8f9c4f3e670a18f6d3d72d11be630c1e6"} Feb 24 02:35:14.610612 master-0 kubenswrapper[31411]: I0224 02:35:14.610560 31411 generic.go:334] "Generic (PLEG): container finished" podID="713f4764-f8a7-4867-bd77-54c68933ca65" 
containerID="e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448" exitCode=0 Feb 24 02:35:14.610694 master-0 kubenswrapper[31411]: I0224 02:35:14.610668 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" event={"ID":"713f4764-f8a7-4867-bd77-54c68933ca65","Type":"ContainerDied","Data":"e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448"} Feb 24 02:35:14.613665 master-0 kubenswrapper[31411]: I0224 02:35:14.613560 31411 generic.go:334] "Generic (PLEG): container finished" podID="bef074ce-14e6-4258-8172-bd2e640ae24b" containerID="7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a" exitCode=0 Feb 24 02:35:14.614071 master-0 kubenswrapper[31411]: I0224 02:35:14.613764 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" event={"ID":"bef074ce-14e6-4258-8172-bd2e640ae24b","Type":"ContainerDied","Data":"7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a"} Feb 24 02:35:14.617235 master-0 kubenswrapper[31411]: I0224 02:35:14.617178 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc7f9869-4kmll" event={"ID":"8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a","Type":"ContainerDied","Data":"39addfa10000bc6b3ad84dcc6d47354d6e94e38399750e630bcbb6d2154f967c"} Feb 24 02:35:14.617235 master-0 kubenswrapper[31411]: I0224 02:35:14.617216 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-4kmll" Feb 24 02:35:14.617376 master-0 kubenswrapper[31411]: I0224 02:35:14.617253 31411 scope.go:117] "RemoveContainer" containerID="bd20997455325414f8ff138c922aab11b6bb808314fbc9b522349bf27528f780" Feb 24 02:35:14.620162 master-0 kubenswrapper[31411]: I0224 02:35:14.620091 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"faa44386-3634-42fe-b2fc-6cdd257a8b1e","Type":"ContainerStarted","Data":"e4caa687a324fa28787f26c63fb1ad19de7b5d89d4a9a2782bee7f333330b2d1"} Feb 24 02:35:14.638283 master-0 kubenswrapper[31411]: I0224 02:35:14.636908 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-config" (OuterVolumeSpecName: "config") pod "8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a" (UID: "8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:14.641639 master-0 kubenswrapper[31411]: W0224 02:35:14.641543 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2e5cfb6_e3cd_428c_9efe_8d23b1f289df.slice/crio-56dda0511e4164ea7c670ddf8d8b1a85c08af5b4ce59891e705f8b73851e2f57 WatchSource:0}: Error finding container 56dda0511e4164ea7c670ddf8d8b1a85c08af5b4ce59891e705f8b73851e2f57: Status 404 returned error can't find the container with id 56dda0511e4164ea7c670ddf8d8b1a85c08af5b4ce59891e705f8b73851e2f57 Feb 24 02:35:14.686461 master-0 kubenswrapper[31411]: I0224 02:35:14.686407 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:15.073544 master-0 kubenswrapper[31411]: I0224 02:35:15.073501 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-bc7f9869-4kmll"] Feb 24 02:35:15.081434 master-0 kubenswrapper[31411]: I0224 02:35:15.080377 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4kmll"] Feb 24 02:35:15.115606 master-0 kubenswrapper[31411]: I0224 02:35:15.114163 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a" path="/var/lib/kubelet/pods/8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a/volumes" Feb 24 02:35:15.182634 master-0 kubenswrapper[31411]: E0224 02:35:15.182536 31411 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 24 02:35:15.182634 master-0 kubenswrapper[31411]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/713f4764-f8a7-4867-bd77-54c68933ca65/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 02:35:15.182634 master-0 kubenswrapper[31411]: > podSandboxID="5be2d96d71995489bdd5e62083675ae8722bc95bc08e2345594eac5e45d11a61" Feb 24 02:35:15.182971 master-0 kubenswrapper[31411]: E0224 02:35:15.182943 31411 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 02:35:15.182971 master-0 kubenswrapper[31411]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c21a3e6f24adda8d9f7cfdb1115a43c928c3ee0ec263e331a215d9da533bbfcd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbchf8h696h5ffh5cdh585hc5hbfh597h58dhfh554h67bh9bh5c9hfch7dh5fbhbbh567h78h669hf8h65dh55dh588h5ddh88h694h669h95h8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pq4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6974cff98c-qbhgh_openstack(713f4764-f8a7-4867-bd77-54c68933ca65): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/713f4764-f8a7-4867-bd77-54c68933ca65/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 24 02:35:15.182971 master-0 kubenswrapper[31411]: > logger="UnhandledError" Feb 24 02:35:15.184218 master-0 kubenswrapper[31411]: E0224 02:35:15.184156 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/713f4764-f8a7-4867-bd77-54c68933ca65/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" podUID="713f4764-f8a7-4867-bd77-54c68933ca65" Feb 24 02:35:15.349991 master-0 kubenswrapper[31411]: I0224 02:35:15.349936 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:35:15.527374 master-0 kubenswrapper[31411]: I0224 02:35:15.527219 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk4mc\" (UniqueName: \"kubernetes.io/projected/9d42f804-aae5-4adf-84cf-6887d203342f-kube-api-access-jk4mc\") pod \"9d42f804-aae5-4adf-84cf-6887d203342f\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " Feb 24 02:35:15.527374 master-0 kubenswrapper[31411]: I0224 02:35:15.527362 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-dns-svc\") pod \"9d42f804-aae5-4adf-84cf-6887d203342f\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " Feb 24 02:35:15.530201 master-0 kubenswrapper[31411]: I0224 02:35:15.527413 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-config\") pod \"9d42f804-aae5-4adf-84cf-6887d203342f\" (UID: \"9d42f804-aae5-4adf-84cf-6887d203342f\") " Feb 24 02:35:15.534938 master-0 kubenswrapper[31411]: I0224 02:35:15.534872 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d42f804-aae5-4adf-84cf-6887d203342f-kube-api-access-jk4mc" (OuterVolumeSpecName: "kube-api-access-jk4mc") pod "9d42f804-aae5-4adf-84cf-6887d203342f" (UID: "9d42f804-aae5-4adf-84cf-6887d203342f"). InnerVolumeSpecName "kube-api-access-jk4mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:35:15.563619 master-0 kubenswrapper[31411]: I0224 02:35:15.563551 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d42f804-aae5-4adf-84cf-6887d203342f" (UID: "9d42f804-aae5-4adf-84cf-6887d203342f"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:15.594791 master-0 kubenswrapper[31411]: I0224 02:35:15.594729 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-config" (OuterVolumeSpecName: "config") pod "9d42f804-aae5-4adf-84cf-6887d203342f" (UID: "9d42f804-aae5-4adf-84cf-6887d203342f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:15.630471 master-0 kubenswrapper[31411]: I0224 02:35:15.630398 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk4mc\" (UniqueName: \"kubernetes.io/projected/9d42f804-aae5-4adf-84cf-6887d203342f-kube-api-access-jk4mc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:15.630471 master-0 kubenswrapper[31411]: I0224 02:35:15.630468 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:15.630471 master-0 kubenswrapper[31411]: I0224 02:35:15.630480 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d42f804-aae5-4adf-84cf-6887d203342f-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:15.637391 master-0 kubenswrapper[31411]: I0224 02:35:15.637346 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" event={"ID":"bef074ce-14e6-4258-8172-bd2e640ae24b","Type":"ContainerStarted","Data":"875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82"} Feb 24 02:35:15.638548 master-0 kubenswrapper[31411]: I0224 02:35:15.638527 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" Feb 24 02:35:15.648082 master-0 kubenswrapper[31411]: I0224 02:35:15.647927 31411 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc47c58d-5bd1-4cb0-942f-6a048792da9a","Type":"ContainerStarted","Data":"cab178893e81ce3fc027589e361050956654cee75cf192c025e9750761ccd5a2"} Feb 24 02:35:15.652694 master-0 kubenswrapper[31411]: I0224 02:35:15.652605 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4c486879-cr468" event={"ID":"9d42f804-aae5-4adf-84cf-6887d203342f","Type":"ContainerDied","Data":"03d9f3337f71c3356ead72ba5723c3886292e40cc87e98e6e358dd51b57ee05a"} Feb 24 02:35:15.652759 master-0 kubenswrapper[31411]: I0224 02:35:15.652712 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-cr468" Feb 24 02:35:15.653228 master-0 kubenswrapper[31411]: I0224 02:35:15.652729 31411 scope.go:117] "RemoveContainer" containerID="427c559c357c1b72be70fe101b48b199223f1b4f2190b06ff2344652728392b0" Feb 24 02:35:15.656766 master-0 kubenswrapper[31411]: I0224 02:35:15.655682 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5680b3af-dae8-4617-80b2-30c0a9818130","Type":"ContainerStarted","Data":"c6824a0e2ca82e08c2d8420752f27bd17ff74edcb1ec26f90b9795fe5f26ce69"} Feb 24 02:35:15.682153 master-0 kubenswrapper[31411]: I0224 02:35:15.682096 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df","Type":"ContainerStarted","Data":"56dda0511e4164ea7c670ddf8d8b1a85c08af5b4ce59891e705f8b73851e2f57"} Feb 24 02:35:15.700506 master-0 kubenswrapper[31411]: I0224 02:35:15.700233 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" podStartSLOduration=4.11905447 podStartE2EDuration="32.700197696s" podCreationTimestamp="2026-02-24 02:34:43 +0000 UTC" firstStartedPulling="2026-02-24 02:34:44.459048863 +0000 UTC m=+827.676246709" lastFinishedPulling="2026-02-24 
02:35:13.040192089 +0000 UTC m=+856.257389935" observedRunningTime="2026-02-24 02:35:15.678729154 +0000 UTC m=+858.895927000" watchObservedRunningTime="2026-02-24 02:35:15.700197696 +0000 UTC m=+858.917395542" Feb 24 02:35:15.838483 master-0 kubenswrapper[31411]: I0224 02:35:15.838331 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-cr468"] Feb 24 02:35:15.855366 master-0 kubenswrapper[31411]: I0224 02:35:15.855321 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-cr468"] Feb 24 02:35:16.698268 master-0 kubenswrapper[31411]: I0224 02:35:16.697783 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" event={"ID":"713f4764-f8a7-4867-bd77-54c68933ca65","Type":"ContainerStarted","Data":"ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19"} Feb 24 02:35:16.741243 master-0 kubenswrapper[31411]: I0224 02:35:16.741149 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" podStartSLOduration=5.438175905 podStartE2EDuration="34.741124556s" podCreationTimestamp="2026-02-24 02:34:42 +0000 UTC" firstStartedPulling="2026-02-24 02:34:43.692469293 +0000 UTC m=+826.909667139" lastFinishedPulling="2026-02-24 02:35:12.995417944 +0000 UTC m=+856.212615790" observedRunningTime="2026-02-24 02:35:16.734022387 +0000 UTC m=+859.951220243" watchObservedRunningTime="2026-02-24 02:35:16.741124556 +0000 UTC m=+859.958322412" Feb 24 02:35:17.116711 master-0 kubenswrapper[31411]: I0224 02:35:17.116478 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d42f804-aae5-4adf-84cf-6887d203342f" path="/var/lib/kubelet/pods/9d42f804-aae5-4adf-84cf-6887d203342f/volumes" Feb 24 02:35:17.521311 master-0 kubenswrapper[31411]: I0224 02:35:17.521231 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" Feb 24 
02:35:21.133448 master-0 kubenswrapper[31411]: I0224 02:35:21.133369 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 24 02:35:22.522967 master-0 kubenswrapper[31411]: I0224 02:35:22.522881 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" Feb 24 02:35:22.785598 master-0 kubenswrapper[31411]: I0224 02:35:22.785419 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"faa44386-3634-42fe-b2fc-6cdd257a8b1e","Type":"ContainerStarted","Data":"00fc2ba1116f07ed70d7633d72e4f68c478a610b37e1f57ba7902d4be65d589f"} Feb 24 02:35:22.791676 master-0 kubenswrapper[31411]: I0224 02:35:22.790810 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"93374608-d6a1-4e71-8682-3a86e5815f29","Type":"ContainerStarted","Data":"ba988d73b4e4113bfb75f1ec9aa32a10ed85a2be5611e5ec6ed854f2eba855a9"} Feb 24 02:35:22.794143 master-0 kubenswrapper[31411]: I0224 02:35:22.794001 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"86b07869-3ccf-46a9-9ca3-9954a1508cff","Type":"ContainerStarted","Data":"3ce5cea28be25b62e9997eb5887e43eaf043dae90daf8d764fa71934a3aff59b"} Feb 24 02:35:22.796420 master-0 kubenswrapper[31411]: I0224 02:35:22.796373 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hjmv9" event={"ID":"76252167-d1e5-4ee1-b26f-853eb9e161a7","Type":"ContainerStarted","Data":"4b586b3d45521f93ad4392e69f8f3e02ce2158e159fb0d8c52c74f0f4281d121"} Feb 24 02:35:22.796519 master-0 kubenswrapper[31411]: I0224 02:35:22.796484 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hjmv9" Feb 24 02:35:22.799314 master-0 kubenswrapper[31411]: I0224 02:35:22.799247 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df","Type":"ContainerStarted","Data":"796d395581c4e75492194270bbfe413e33e430cab46b82f0da98764337d53e95"} Feb 24 02:35:22.802182 master-0 kubenswrapper[31411]: I0224 02:35:22.802147 31411 generic.go:334] "Generic (PLEG): container finished" podID="67e9af05-4de5-4257-b103-4af520af6fec" containerID="56aec29a38aa951ef1fd42fcd6465ebe4344b4f315be1a3e5f475d9a142e241b" exitCode=0 Feb 24 02:35:22.802348 master-0 kubenswrapper[31411]: I0224 02:35:22.802191 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lp2wm" event={"ID":"67e9af05-4de5-4257-b103-4af520af6fec","Type":"ContainerDied","Data":"56aec29a38aa951ef1fd42fcd6465ebe4344b4f315be1a3e5f475d9a142e241b"} Feb 24 02:35:22.873799 master-0 kubenswrapper[31411]: I0224 02:35:22.873697 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hjmv9" podStartSLOduration=15.905795306 podStartE2EDuration="23.873668057s" podCreationTimestamp="2026-02-24 02:34:59 +0000 UTC" firstStartedPulling="2026-02-24 02:35:13.543077015 +0000 UTC m=+856.760274861" lastFinishedPulling="2026-02-24 02:35:21.510949756 +0000 UTC m=+864.728147612" observedRunningTime="2026-02-24 02:35:22.84843036 +0000 UTC m=+866.065628206" watchObservedRunningTime="2026-02-24 02:35:22.873668057 +0000 UTC m=+866.090865903" Feb 24 02:35:23.799023 master-0 kubenswrapper[31411]: I0224 02:35:23.798866 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" Feb 24 02:35:23.814641 master-0 kubenswrapper[31411]: I0224 02:35:23.814489 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lp2wm" event={"ID":"67e9af05-4de5-4257-b103-4af520af6fec","Type":"ContainerStarted","Data":"ce5ab3b7bd9d7fa1df1dd907463899bfedeb4f4d726a301dec475c762ca68fc3"} Feb 24 02:35:23.916830 master-0 kubenswrapper[31411]: I0224 02:35:23.914977 31411 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-qbhgh"] Feb 24 02:35:23.926637 master-0 kubenswrapper[31411]: I0224 02:35:23.925495 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" podUID="713f4764-f8a7-4867-bd77-54c68933ca65" containerName="dnsmasq-dns" containerID="cri-o://ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19" gracePeriod=10 Feb 24 02:35:24.579222 master-0 kubenswrapper[31411]: I0224 02:35:24.578219 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" Feb 24 02:35:24.635941 master-0 kubenswrapper[31411]: I0224 02:35:24.633980 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-dns-svc\") pod \"713f4764-f8a7-4867-bd77-54c68933ca65\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " Feb 24 02:35:24.635941 master-0 kubenswrapper[31411]: I0224 02:35:24.634131 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-config\") pod \"713f4764-f8a7-4867-bd77-54c68933ca65\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " Feb 24 02:35:24.635941 master-0 kubenswrapper[31411]: I0224 02:35:24.634208 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pq4r\" (UniqueName: \"kubernetes.io/projected/713f4764-f8a7-4867-bd77-54c68933ca65-kube-api-access-6pq4r\") pod \"713f4764-f8a7-4867-bd77-54c68933ca65\" (UID: \"713f4764-f8a7-4867-bd77-54c68933ca65\") " Feb 24 02:35:24.650692 master-0 kubenswrapper[31411]: I0224 02:35:24.642445 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713f4764-f8a7-4867-bd77-54c68933ca65-kube-api-access-6pq4r" 
(OuterVolumeSpecName: "kube-api-access-6pq4r") pod "713f4764-f8a7-4867-bd77-54c68933ca65" (UID: "713f4764-f8a7-4867-bd77-54c68933ca65"). InnerVolumeSpecName "kube-api-access-6pq4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:35:24.705943 master-0 kubenswrapper[31411]: I0224 02:35:24.705854 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "713f4764-f8a7-4867-bd77-54c68933ca65" (UID: "713f4764-f8a7-4867-bd77-54c68933ca65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:24.712015 master-0 kubenswrapper[31411]: I0224 02:35:24.711924 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-config" (OuterVolumeSpecName: "config") pod "713f4764-f8a7-4867-bd77-54c68933ca65" (UID: "713f4764-f8a7-4867-bd77-54c68933ca65"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:24.738596 master-0 kubenswrapper[31411]: I0224 02:35:24.737843 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:24.738596 master-0 kubenswrapper[31411]: I0224 02:35:24.737883 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/713f4764-f8a7-4867-bd77-54c68933ca65-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:24.738596 master-0 kubenswrapper[31411]: I0224 02:35:24.737926 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pq4r\" (UniqueName: \"kubernetes.io/projected/713f4764-f8a7-4867-bd77-54c68933ca65-kube-api-access-6pq4r\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:24.842542 master-0 kubenswrapper[31411]: I0224 02:35:24.842445 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d2e5cfb6-e3cd-428c-9efe-8d23b1f289df","Type":"ContainerStarted","Data":"51c967649e932fc89b0daa32ba171dbf7196d9d4647562199672abe233901c29"} Feb 24 02:35:24.847160 master-0 kubenswrapper[31411]: I0224 02:35:24.847101 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lp2wm" event={"ID":"67e9af05-4de5-4257-b103-4af520af6fec","Type":"ContainerStarted","Data":"f6a233cc3ae2bf99cd9f88a3d4d32b8bbc403beb32f84324b01370a3c3f8a68e"} Feb 24 02:35:24.847331 master-0 kubenswrapper[31411]: I0224 02:35:24.847281 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:35:24.847418 master-0 kubenswrapper[31411]: I0224 02:35:24.847337 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lp2wm" Feb 24 02:35:24.849237 master-0 kubenswrapper[31411]: I0224 02:35:24.849191 31411 
generic.go:334] "Generic (PLEG): container finished" podID="713f4764-f8a7-4867-bd77-54c68933ca65" containerID="ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19" exitCode=0 Feb 24 02:35:24.849321 master-0 kubenswrapper[31411]: I0224 02:35:24.849252 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" event={"ID":"713f4764-f8a7-4867-bd77-54c68933ca65","Type":"ContainerDied","Data":"ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19"} Feb 24 02:35:24.849321 master-0 kubenswrapper[31411]: I0224 02:35:24.849274 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" event={"ID":"713f4764-f8a7-4867-bd77-54c68933ca65","Type":"ContainerDied","Data":"5be2d96d71995489bdd5e62083675ae8722bc95bc08e2345594eac5e45d11a61"} Feb 24 02:35:24.849321 master-0 kubenswrapper[31411]: I0224 02:35:24.849295 31411 scope.go:117] "RemoveContainer" containerID="ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19" Feb 24 02:35:24.849462 master-0 kubenswrapper[31411]: I0224 02:35:24.849426 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-qbhgh" Feb 24 02:35:24.852041 master-0 kubenswrapper[31411]: I0224 02:35:24.851986 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"faa44386-3634-42fe-b2fc-6cdd257a8b1e","Type":"ContainerStarted","Data":"273e2dbe7d980b85b3ef91fd1f2cc737f5b71cf871eb16fcff8d12fa99a049ad"} Feb 24 02:35:24.870556 master-0 kubenswrapper[31411]: I0224 02:35:24.870469 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.543514653 podStartE2EDuration="26.870447922s" podCreationTimestamp="2026-02-24 02:34:58 +0000 UTC" firstStartedPulling="2026-02-24 02:35:14.654615305 +0000 UTC m=+857.871813171" lastFinishedPulling="2026-02-24 02:35:23.981548594 +0000 UTC m=+867.198746440" observedRunningTime="2026-02-24 02:35:24.866921843 +0000 UTC m=+868.084119689" watchObservedRunningTime="2026-02-24 02:35:24.870447922 +0000 UTC m=+868.087645768" Feb 24 02:35:24.899832 master-0 kubenswrapper[31411]: I0224 02:35:24.899776 31411 scope.go:117] "RemoveContainer" containerID="e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448" Feb 24 02:35:24.902750 master-0 kubenswrapper[31411]: I0224 02:35:24.900472 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:24.902750 master-0 kubenswrapper[31411]: I0224 02:35:24.901129 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-qbhgh"] Feb 24 02:35:24.914995 master-0 kubenswrapper[31411]: I0224 02:35:24.914935 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-qbhgh"] Feb 24 02:35:24.916627 master-0 kubenswrapper[31411]: I0224 02:35:24.916529 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lp2wm" podStartSLOduration=18.679056678 
podStartE2EDuration="25.916503133s" podCreationTimestamp="2026-02-24 02:34:59 +0000 UTC" firstStartedPulling="2026-02-24 02:35:14.244521829 +0000 UTC m=+857.461719665" lastFinishedPulling="2026-02-24 02:35:21.481968274 +0000 UTC m=+864.699166120" observedRunningTime="2026-02-24 02:35:24.908648773 +0000 UTC m=+868.125846619" watchObservedRunningTime="2026-02-24 02:35:24.916503133 +0000 UTC m=+868.133700979" Feb 24 02:35:24.938599 master-0 kubenswrapper[31411]: I0224 02:35:24.935718 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.881952687 podStartE2EDuration="22.935690541s" podCreationTimestamp="2026-02-24 02:35:02 +0000 UTC" firstStartedPulling="2026-02-24 02:35:13.950087415 +0000 UTC m=+857.167285251" lastFinishedPulling="2026-02-24 02:35:24.003825259 +0000 UTC m=+867.221023105" observedRunningTime="2026-02-24 02:35:24.932115391 +0000 UTC m=+868.149313247" watchObservedRunningTime="2026-02-24 02:35:24.935690541 +0000 UTC m=+868.152888387" Feb 24 02:35:24.947763 master-0 kubenswrapper[31411]: I0224 02:35:24.946542 31411 scope.go:117] "RemoveContainer" containerID="ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19" Feb 24 02:35:24.947763 master-0 kubenswrapper[31411]: E0224 02:35:24.947210 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19\": container with ID starting with ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19 not found: ID does not exist" containerID="ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19" Feb 24 02:35:24.947763 master-0 kubenswrapper[31411]: I0224 02:35:24.947266 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19"} err="failed to get container status 
\"ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19\": rpc error: code = NotFound desc = could not find container \"ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19\": container with ID starting with ee9b80ad25cfd9625a8d508f04fb1903c4ecca6d160d6b7e1cbe1b2486f8ce19 not found: ID does not exist" Feb 24 02:35:24.947763 master-0 kubenswrapper[31411]: I0224 02:35:24.947303 31411 scope.go:117] "RemoveContainer" containerID="e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448" Feb 24 02:35:24.952595 master-0 kubenswrapper[31411]: E0224 02:35:24.948003 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448\": container with ID starting with e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448 not found: ID does not exist" containerID="e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448" Feb 24 02:35:24.952595 master-0 kubenswrapper[31411]: I0224 02:35:24.948052 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448"} err="failed to get container status \"e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448\": rpc error: code = NotFound desc = could not find container \"e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448\": container with ID starting with e74b856b6a1bb261cd442ba2c8bc6d6a21e56f91f19a8b9203e73761682d6448 not found: ID does not exist" Feb 24 02:35:24.960655 master-0 kubenswrapper[31411]: I0224 02:35:24.960436 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:25.123257 master-0 kubenswrapper[31411]: I0224 02:35:25.123009 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713f4764-f8a7-4867-bd77-54c68933ca65" 
path="/var/lib/kubelet/pods/713f4764-f8a7-4867-bd77-54c68933ca65/volumes" Feb 24 02:35:25.873861 master-0 kubenswrapper[31411]: I0224 02:35:25.873759 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 24 02:35:26.843617 master-0 kubenswrapper[31411]: I0224 02:35:26.843488 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:26.891202 master-0 kubenswrapper[31411]: I0224 02:35:26.891117 31411 generic.go:334] "Generic (PLEG): container finished" podID="93374608-d6a1-4e71-8682-3a86e5815f29" containerID="ba988d73b4e4113bfb75f1ec9aa32a10ed85a2be5611e5ec6ed854f2eba855a9" exitCode=0 Feb 24 02:35:26.892085 master-0 kubenswrapper[31411]: I0224 02:35:26.891233 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"93374608-d6a1-4e71-8682-3a86e5815f29","Type":"ContainerDied","Data":"ba988d73b4e4113bfb75f1ec9aa32a10ed85a2be5611e5ec6ed854f2eba855a9"} Feb 24 02:35:26.894208 master-0 kubenswrapper[31411]: I0224 02:35:26.893562 31411 generic.go:334] "Generic (PLEG): container finished" podID="86b07869-3ccf-46a9-9ca3-9954a1508cff" containerID="3ce5cea28be25b62e9997eb5887e43eaf043dae90daf8d764fa71934a3aff59b" exitCode=0 Feb 24 02:35:26.894208 master-0 kubenswrapper[31411]: I0224 02:35:26.893713 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"86b07869-3ccf-46a9-9ca3-9954a1508cff","Type":"ContainerDied","Data":"3ce5cea28be25b62e9997eb5887e43eaf043dae90daf8d764fa71934a3aff59b"} Feb 24 02:35:26.923358 master-0 kubenswrapper[31411]: I0224 02:35:26.923294 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:26.984822 master-0 kubenswrapper[31411]: I0224 02:35:26.984748 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 24 
02:35:27.356625 master-0 kubenswrapper[31411]: I0224 02:35:27.355826 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-679f75d775-s56hh"] Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: E0224 02:35:27.356277 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713f4764-f8a7-4867-bd77-54c68933ca65" containerName="init" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: I0224 02:35:27.356293 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="713f4764-f8a7-4867-bd77-54c68933ca65" containerName="init" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: E0224 02:35:27.356333 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d42f804-aae5-4adf-84cf-6887d203342f" containerName="init" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: I0224 02:35:27.356339 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d42f804-aae5-4adf-84cf-6887d203342f" containerName="init" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: E0224 02:35:27.356353 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a" containerName="init" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: I0224 02:35:27.356360 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a" containerName="init" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: E0224 02:35:27.356385 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713f4764-f8a7-4867-bd77-54c68933ca65" containerName="dnsmasq-dns" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: I0224 02:35:27.356391 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="713f4764-f8a7-4867-bd77-54c68933ca65" containerName="dnsmasq-dns" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: I0224 02:35:27.356586 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="713f4764-f8a7-4867-bd77-54c68933ca65" 
containerName="dnsmasq-dns" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: I0224 02:35:27.356618 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb75a8f-5e1e-479a-b9e2-b1fa1279cd4a" containerName="init" Feb 24 02:35:27.356625 master-0 kubenswrapper[31411]: I0224 02:35:27.356649 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d42f804-aae5-4adf-84cf-6887d203342f" containerName="init" Feb 24 02:35:27.357718 master-0 kubenswrapper[31411]: I0224 02:35:27.357686 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.369543 master-0 kubenswrapper[31411]: I0224 02:35:27.369512 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 24 02:35:27.390548 master-0 kubenswrapper[31411]: I0224 02:35:27.390484 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-679f75d775-s56hh"] Feb 24 02:35:27.476106 master-0 kubenswrapper[31411]: I0224 02:35:27.475783 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5w4cf"] Feb 24 02:35:27.483129 master-0 kubenswrapper[31411]: I0224 02:35:27.482769 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.495156 master-0 kubenswrapper[31411]: I0224 02:35:27.486646 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 24 02:35:27.495156 master-0 kubenswrapper[31411]: I0224 02:35:27.491487 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5w4cf"] Feb 24 02:35:27.547500 master-0 kubenswrapper[31411]: I0224 02:35:27.547416 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-dns-svc\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.547781 master-0 kubenswrapper[31411]: I0224 02:35:27.547675 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-config\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.547781 master-0 kubenswrapper[31411]: I0224 02:35:27.547708 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqxbq\" (UniqueName: \"kubernetes.io/projected/35eba989-c9cb-4c28-9b21-5d149c4672eb-kube-api-access-xqxbq\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.547781 master-0 kubenswrapper[31411]: I0224 02:35:27.547733 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-ovsdbserver-sb\") pod 
\"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.649626 master-0 kubenswrapper[31411]: I0224 02:35:27.649536 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77be4a7f-0ff5-439e-9298-8e071291ba72-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.649626 master-0 kubenswrapper[31411]: I0224 02:35:27.649633 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-config\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.649975 master-0 kubenswrapper[31411]: I0224 02:35:27.649665 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/77be4a7f-0ff5-439e-9298-8e071291ba72-ovs-rundir\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.649975 master-0 kubenswrapper[31411]: I0224 02:35:27.649689 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqxbq\" (UniqueName: \"kubernetes.io/projected/35eba989-c9cb-4c28-9b21-5d149c4672eb-kube-api-access-xqxbq\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.649975 master-0 kubenswrapper[31411]: I0224 02:35:27.649879 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-ovsdbserver-sb\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.650122 master-0 kubenswrapper[31411]: I0224 02:35:27.650047 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrndj\" (UniqueName: \"kubernetes.io/projected/77be4a7f-0ff5-439e-9298-8e071291ba72-kube-api-access-nrndj\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.650518 master-0 kubenswrapper[31411]: I0224 02:35:27.650461 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/77be4a7f-0ff5-439e-9298-8e071291ba72-ovn-rundir\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.650762 master-0 kubenswrapper[31411]: I0224 02:35:27.650722 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-config\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.650945 master-0 kubenswrapper[31411]: I0224 02:35:27.650901 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-ovsdbserver-sb\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.651114 master-0 kubenswrapper[31411]: I0224 02:35:27.651093 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-dns-svc\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.651307 master-0 kubenswrapper[31411]: I0224 02:35:27.651288 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77be4a7f-0ff5-439e-9298-8e071291ba72-config\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.651424 master-0 kubenswrapper[31411]: I0224 02:35:27.651402 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77be4a7f-0ff5-439e-9298-8e071291ba72-combined-ca-bundle\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.651851 master-0 kubenswrapper[31411]: I0224 02:35:27.651756 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-dns-svc\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.667289 master-0 kubenswrapper[31411]: I0224 02:35:27.667245 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqxbq\" (UniqueName: \"kubernetes.io/projected/35eba989-c9cb-4c28-9b21-5d149c4672eb-kube-api-access-xqxbq\") pod \"dnsmasq-dns-679f75d775-s56hh\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") " pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.685290 master-0 kubenswrapper[31411]: I0224 02:35:27.683282 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-679f75d775-s56hh" Feb 24 02:35:27.754674 master-0 kubenswrapper[31411]: I0224 02:35:27.754520 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/77be4a7f-0ff5-439e-9298-8e071291ba72-ovn-rundir\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.754890 master-0 kubenswrapper[31411]: I0224 02:35:27.754687 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77be4a7f-0ff5-439e-9298-8e071291ba72-config\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.754890 master-0 kubenswrapper[31411]: I0224 02:35:27.754730 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77be4a7f-0ff5-439e-9298-8e071291ba72-combined-ca-bundle\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.754890 master-0 kubenswrapper[31411]: I0224 02:35:27.754754 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/77be4a7f-0ff5-439e-9298-8e071291ba72-ovn-rundir\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.755024 master-0 kubenswrapper[31411]: I0224 02:35:27.754999 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77be4a7f-0ff5-439e-9298-8e071291ba72-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5w4cf\" (UID: 
\"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.755199 master-0 kubenswrapper[31411]: I0224 02:35:27.755164 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/77be4a7f-0ff5-439e-9298-8e071291ba72-ovs-rundir\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.755276 master-0 kubenswrapper[31411]: I0224 02:35:27.755253 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrndj\" (UniqueName: \"kubernetes.io/projected/77be4a7f-0ff5-439e-9298-8e071291ba72-kube-api-access-nrndj\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.755491 master-0 kubenswrapper[31411]: I0224 02:35:27.755432 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/77be4a7f-0ff5-439e-9298-8e071291ba72-ovs-rundir\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.755749 master-0 kubenswrapper[31411]: I0224 02:35:27.755692 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77be4a7f-0ff5-439e-9298-8e071291ba72-config\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.759036 master-0 kubenswrapper[31411]: I0224 02:35:27.758797 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77be4a7f-0ff5-439e-9298-8e071291ba72-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5w4cf\" (UID: 
\"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.762720 master-0 kubenswrapper[31411]: I0224 02:35:27.762336 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77be4a7f-0ff5-439e-9298-8e071291ba72-combined-ca-bundle\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.795602 master-0 kubenswrapper[31411]: I0224 02:35:27.793592 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrndj\" (UniqueName: \"kubernetes.io/projected/77be4a7f-0ff5-439e-9298-8e071291ba72-kube-api-access-nrndj\") pod \"ovn-controller-metrics-5w4cf\" (UID: \"77be4a7f-0ff5-439e-9298-8e071291ba72\") " pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.809598 master-0 kubenswrapper[31411]: I0224 02:35:27.805349 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5w4cf" Feb 24 02:35:27.853598 master-0 kubenswrapper[31411]: I0224 02:35:27.853233 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:27.857592 master-0 kubenswrapper[31411]: I0224 02:35:27.854727 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-679f75d775-s56hh"] Feb 24 02:35:28.011676 master-0 kubenswrapper[31411]: I0224 02:35:28.010115 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 24 02:35:28.021596 master-0 kubenswrapper[31411]: I0224 02:35:28.018052 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"93374608-d6a1-4e71-8682-3a86e5815f29","Type":"ContainerStarted","Data":"2ae6b597a4cf7ad64af0e068aebd8ca6f49f43be10a0e53bb7fb274ab194ceba"} Feb 24 02:35:28.021596 master-0 kubenswrapper[31411]: I0224 02:35:28.021258 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"86b07869-3ccf-46a9-9ca3-9954a1508cff","Type":"ContainerStarted","Data":"048e63456f59732741835f7ca3e92e75e3a216d9bfaa3d8b6c1acba15dc81405"} Feb 24 02:35:28.086723 master-0 kubenswrapper[31411]: I0224 02:35:28.086667 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79745f7855-j9vwf"] Feb 24 02:35:28.096129 master-0 kubenswrapper[31411]: I0224 02:35:28.096088 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.100908 master-0 kubenswrapper[31411]: I0224 02:35:28.100861 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 24 02:35:28.111438 master-0 kubenswrapper[31411]: I0224 02:35:28.109168 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79745f7855-j9vwf"] Feb 24 02:35:28.142603 master-0 kubenswrapper[31411]: I0224 02:35:28.142485 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=35.175456998 podStartE2EDuration="43.142452275s" podCreationTimestamp="2026-02-24 02:34:45 +0000 UTC" firstStartedPulling="2026-02-24 02:35:13.545827692 +0000 UTC m=+856.763025538" lastFinishedPulling="2026-02-24 02:35:21.512822939 +0000 UTC m=+864.730020815" observedRunningTime="2026-02-24 02:35:28.130325935 +0000 UTC m=+871.347523781" watchObservedRunningTime="2026-02-24 02:35:28.142452275 +0000 UTC m=+871.359650121" Feb 24 02:35:28.185951 master-0 kubenswrapper[31411]: I0224 02:35:28.184898 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7k7x\" (UniqueName: \"kubernetes.io/projected/7f8e94c7-6108-4435-a235-da04091a591a-kube-api-access-k7k7x\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.185951 master-0 kubenswrapper[31411]: I0224 02:35:28.185081 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-config\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.185951 master-0 kubenswrapper[31411]: I0224 02:35:28.185207 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-nb\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.185951 master-0 kubenswrapper[31411]: I0224 02:35:28.185470 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-dns-svc\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.185951 master-0 kubenswrapper[31411]: I0224 02:35:28.185735 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-sb\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.221060 master-0 kubenswrapper[31411]: I0224 02:35:28.217985 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=36.498112553 podStartE2EDuration="44.217961532s" podCreationTimestamp="2026-02-24 02:34:44 +0000 UTC" firstStartedPulling="2026-02-24 02:35:13.762044413 +0000 UTC m=+856.979242289" lastFinishedPulling="2026-02-24 02:35:21.481893382 +0000 UTC m=+864.699091268" observedRunningTime="2026-02-24 02:35:28.208058394 +0000 UTC m=+871.425256250" watchObservedRunningTime="2026-02-24 02:35:28.217961532 +0000 UTC m=+871.435159378" Feb 24 02:35:28.291040 master-0 kubenswrapper[31411]: I0224 02:35:28.289963 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-dns-svc\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.291040 master-0 kubenswrapper[31411]: I0224 02:35:28.290103 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-sb\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.291040 master-0 kubenswrapper[31411]: I0224 02:35:28.290164 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7k7x\" (UniqueName: \"kubernetes.io/projected/7f8e94c7-6108-4435-a235-da04091a591a-kube-api-access-k7k7x\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.291040 master-0 kubenswrapper[31411]: I0224 02:35:28.290190 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-config\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.291040 master-0 kubenswrapper[31411]: I0224 02:35:28.290243 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-nb\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.293202 master-0 kubenswrapper[31411]: I0224 02:35:28.291701 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-dns-svc\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.293202 master-0 kubenswrapper[31411]: I0224 02:35:28.292500 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-sb\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.293202 master-0 kubenswrapper[31411]: I0224 02:35:28.293060 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-config\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.294654 master-0 kubenswrapper[31411]: I0224 02:35:28.293688 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-nb\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.316064 master-0 kubenswrapper[31411]: I0224 02:35:28.316024 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7k7x\" (UniqueName: \"kubernetes.io/projected/7f8e94c7-6108-4435-a235-da04091a591a-kube-api-access-k7k7x\") pod \"dnsmasq-dns-79745f7855-j9vwf\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") " pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.412663 master-0 kubenswrapper[31411]: I0224 02:35:28.412617 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-679f75d775-s56hh"] Feb 24 02:35:28.428245 master-0 
kubenswrapper[31411]: I0224 02:35:28.428108 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 24 02:35:28.431236 master-0 kubenswrapper[31411]: I0224 02:35:28.431207 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 02:35:28.441710 master-0 kubenswrapper[31411]: I0224 02:35:28.436168 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 24 02:35:28.441710 master-0 kubenswrapper[31411]: I0224 02:35:28.436224 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 24 02:35:28.441710 master-0 kubenswrapper[31411]: I0224 02:35:28.436491 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 24 02:35:28.447807 master-0 kubenswrapper[31411]: I0224 02:35:28.447766 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79745f7855-j9vwf" Feb 24 02:35:28.453819 master-0 kubenswrapper[31411]: I0224 02:35:28.453772 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 24 02:35:28.594538 master-0 kubenswrapper[31411]: I0224 02:35:28.594470 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5w4cf"] Feb 24 02:35:28.602610 master-0 kubenswrapper[31411]: I0224 02:35:28.602550 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-config\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.602610 master-0 kubenswrapper[31411]: I0224 02:35:28.602605 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.602919 master-0 kubenswrapper[31411]: I0224 02:35:28.602870 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.603079 master-0 kubenswrapper[31411]: I0224 02:35:28.603054 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hkht\" (UniqueName: \"kubernetes.io/projected/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-kube-api-access-9hkht\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.603296 master-0 kubenswrapper[31411]: I0224 02:35:28.603268 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-scripts\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.603371 master-0 kubenswrapper[31411]: I0224 02:35:28.603349 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.603424 master-0 kubenswrapper[31411]: I0224 02:35:28.603389 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.706408 master-0 kubenswrapper[31411]: I0224 02:35:28.706345 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.706519 master-0 kubenswrapper[31411]: I0224 02:35:28.706467 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-config\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.706519 master-0 kubenswrapper[31411]: I0224 02:35:28.706498 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.706603 master-0 kubenswrapper[31411]: I0224 02:35:28.706544 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.706746 master-0 kubenswrapper[31411]: I0224 02:35:28.706705 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hkht\" (UniqueName: \"kubernetes.io/projected/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-kube-api-access-9hkht\") pod \"ovn-northd-0\" (UID: 
\"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.706996 master-0 kubenswrapper[31411]: I0224 02:35:28.706844 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-scripts\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.706996 master-0 kubenswrapper[31411]: I0224 02:35:28.706903 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.714014 master-0 kubenswrapper[31411]: I0224 02:35:28.713638 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-config\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.715756 master-0 kubenswrapper[31411]: I0224 02:35:28.715707 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.716160 master-0 kubenswrapper[31411]: I0224 02:35:28.716126 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.717204 master-0 kubenswrapper[31411]: I0224 02:35:28.717148 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-scripts\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.718629 master-0 kubenswrapper[31411]: I0224 02:35:28.718560 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.748977 master-0 kubenswrapper[31411]: I0224 02:35:28.747623 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hkht\" (UniqueName: \"kubernetes.io/projected/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-kube-api-access-9hkht\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.768697 master-0 kubenswrapper[31411]: I0224 02:35:28.768597 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd15c92e-3d10-4ad1-8769-ff495fcf6f49-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bd15c92e-3d10-4ad1-8769-ff495fcf6f49\") " pod="openstack/ovn-northd-0" Feb 24 02:35:28.989256 master-0 kubenswrapper[31411]: I0224 02:35:28.988456 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79745f7855-j9vwf"] Feb 24 02:35:29.022905 master-0 kubenswrapper[31411]: I0224 02:35:29.022561 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79745f7855-j9vwf"] Feb 24 02:35:29.063119 master-0 kubenswrapper[31411]: I0224 02:35:29.062345 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b55dc5f67-k2lcw"] Feb 24 02:35:29.064688 master-0 kubenswrapper[31411]: I0224 02:35:29.064619 31411 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 02:35:29.064759 master-0 kubenswrapper[31411]: I0224 02:35:29.064741 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" Feb 24 02:35:29.071257 master-0 kubenswrapper[31411]: I0224 02:35:29.071192 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b55dc5f67-k2lcw"] Feb 24 02:35:29.144527 master-0 kubenswrapper[31411]: I0224 02:35:29.144466 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 02:35:29.144779 master-0 kubenswrapper[31411]: I0224 02:35:29.144551 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 02:35:29.165470 master-0 kubenswrapper[31411]: I0224 02:35:29.165212 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5w4cf" event={"ID":"77be4a7f-0ff5-439e-9298-8e071291ba72","Type":"ContainerStarted","Data":"33d29721f56942a7bcf3a32ed9f6dc7c267863af16b07620caca1f06d0291f20"} Feb 24 02:35:29.167589 master-0 kubenswrapper[31411]: I0224 02:35:29.167166 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79745f7855-j9vwf" event={"ID":"7f8e94c7-6108-4435-a235-da04091a591a","Type":"ContainerStarted","Data":"988b99c7f97c238c86cbf105cf5628c91f32b0b01e74f0dbd36050f96db31790"} Feb 24 02:35:29.191175 master-0 kubenswrapper[31411]: I0224 02:35:29.191117 31411 generic.go:334] "Generic (PLEG): container finished" podID="35eba989-c9cb-4c28-9b21-5d149c4672eb" containerID="32799e292dbacee5155cb9e18194b42c0c77635b92d984fb9b837ec0b6d6cc0f" exitCode=0 Feb 24 02:35:29.191422 master-0 kubenswrapper[31411]: I0224 02:35:29.191273 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-679f75d775-s56hh" 
event={"ID":"35eba989-c9cb-4c28-9b21-5d149c4672eb","Type":"ContainerDied","Data":"32799e292dbacee5155cb9e18194b42c0c77635b92d984fb9b837ec0b6d6cc0f"}
Feb 24 02:35:29.191464 master-0 kubenswrapper[31411]: I0224 02:35:29.191426 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-679f75d775-s56hh" event={"ID":"35eba989-c9cb-4c28-9b21-5d149c4672eb","Type":"ContainerStarted","Data":"8775c59455e8df11c1a004d926090027e1bea22941d3ad85778eb16ba1325dc6"}
Feb 24 02:35:29.289131 master-0 kubenswrapper[31411]: I0224 02:35:29.252020 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzvk\" (UniqueName: \"kubernetes.io/projected/42530b69-b5be-4699-873f-51cf3161c255-kube-api-access-hmzvk\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.289131 master-0 kubenswrapper[31411]: I0224 02:35:29.252376 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-dns-svc\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.289131 master-0 kubenswrapper[31411]: I0224 02:35:29.252494 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-config\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.289131 master-0 kubenswrapper[31411]: I0224 02:35:29.252723 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-nb\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.289131 master-0 kubenswrapper[31411]: I0224 02:35:29.253171 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-sb\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.356792 master-0 kubenswrapper[31411]: I0224 02:35:29.356742 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-nb\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.356926 master-0 kubenswrapper[31411]: I0224 02:35:29.356883 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-sb\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.357058 master-0 kubenswrapper[31411]: I0224 02:35:29.357029 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmzvk\" (UniqueName: \"kubernetes.io/projected/42530b69-b5be-4699-873f-51cf3161c255-kube-api-access-hmzvk\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.357116 master-0 kubenswrapper[31411]: I0224 02:35:29.357057 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-dns-svc\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.357221 master-0 kubenswrapper[31411]: I0224 02:35:29.357182 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-config\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.358997 master-0 kubenswrapper[31411]: I0224 02:35:29.358976 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-nb\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.361117 master-0 kubenswrapper[31411]: I0224 02:35:29.361091 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-sb\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.363299 master-0 kubenswrapper[31411]: I0224 02:35:29.363265 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-dns-svc\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.365584 master-0 kubenswrapper[31411]: I0224 02:35:29.365537 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-config\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.389314 master-0 kubenswrapper[31411]: I0224 02:35:29.388812 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmzvk\" (UniqueName: \"kubernetes.io/projected/42530b69-b5be-4699-873f-51cf3161c255-kube-api-access-hmzvk\") pod \"dnsmasq-dns-5b55dc5f67-k2lcw\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.470698 master-0 kubenswrapper[31411]: I0224 02:35:29.466929 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw"
Feb 24 02:35:29.701411 master-0 kubenswrapper[31411]: I0224 02:35:29.701353 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-679f75d775-s56hh"
Feb 24 02:35:29.766698 master-0 kubenswrapper[31411]: I0224 02:35:29.766611 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-ovsdbserver-sb\") pod \"35eba989-c9cb-4c28-9b21-5d149c4672eb\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") "
Feb 24 02:35:29.766698 master-0 kubenswrapper[31411]: I0224 02:35:29.766697 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqxbq\" (UniqueName: \"kubernetes.io/projected/35eba989-c9cb-4c28-9b21-5d149c4672eb-kube-api-access-xqxbq\") pod \"35eba989-c9cb-4c28-9b21-5d149c4672eb\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") "
Feb 24 02:35:29.767179 master-0 kubenswrapper[31411]: I0224 02:35:29.767146 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-config\") pod \"35eba989-c9cb-4c28-9b21-5d149c4672eb\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") "
Feb 24 02:35:29.767239 master-0 kubenswrapper[31411]: I0224 02:35:29.767184 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-dns-svc\") pod \"35eba989-c9cb-4c28-9b21-5d149c4672eb\" (UID: \"35eba989-c9cb-4c28-9b21-5d149c4672eb\") "
Feb 24 02:35:29.771631 master-0 kubenswrapper[31411]: I0224 02:35:29.771564 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35eba989-c9cb-4c28-9b21-5d149c4672eb-kube-api-access-xqxbq" (OuterVolumeSpecName: "kube-api-access-xqxbq") pod "35eba989-c9cb-4c28-9b21-5d149c4672eb" (UID: "35eba989-c9cb-4c28-9b21-5d149c4672eb"). InnerVolumeSpecName "kube-api-access-xqxbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:35:29.807283 master-0 kubenswrapper[31411]: I0224 02:35:29.796755 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35eba989-c9cb-4c28-9b21-5d149c4672eb" (UID: "35eba989-c9cb-4c28-9b21-5d149c4672eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:29.852012 master-0 kubenswrapper[31411]: I0224 02:35:29.851890 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 24 02:35:29.864153 master-0 kubenswrapper[31411]: I0224 02:35:29.864004 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-config" (OuterVolumeSpecName: "config") pod "35eba989-c9cb-4c28-9b21-5d149c4672eb" (UID: "35eba989-c9cb-4c28-9b21-5d149c4672eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:29.864153 master-0 kubenswrapper[31411]: I0224 02:35:29.864113 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "35eba989-c9cb-4c28-9b21-5d149c4672eb" (UID: "35eba989-c9cb-4c28-9b21-5d149c4672eb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:29.869537 master-0 kubenswrapper[31411]: I0224 02:35:29.869481 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:29.869537 master-0 kubenswrapper[31411]: I0224 02:35:29.869530 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:29.869666 master-0 kubenswrapper[31411]: I0224 02:35:29.869540 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35eba989-c9cb-4c28-9b21-5d149c4672eb-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:29.869666 master-0 kubenswrapper[31411]: I0224 02:35:29.869556 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqxbq\" (UniqueName: \"kubernetes.io/projected/35eba989-c9cb-4c28-9b21-5d149c4672eb-kube-api-access-xqxbq\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:30.010604 master-0 kubenswrapper[31411]: I0224 02:35:30.006309 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b55dc5f67-k2lcw"]
Feb 24 02:35:30.015845 master-0 kubenswrapper[31411]: W0224 02:35:30.011363 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42530b69_b5be_4699_873f_51cf3161c255.slice/crio-3a947c896bab121a3fb3c5c6a8f0566d90ecf16668da27679d5113ef5e5d3efa WatchSource:0}: Error finding container 3a947c896bab121a3fb3c5c6a8f0566d90ecf16668da27679d5113ef5e5d3efa: Status 404 returned error can't find the container with id 3a947c896bab121a3fb3c5c6a8f0566d90ecf16668da27679d5113ef5e5d3efa
Feb 24 02:35:30.207992 master-0 kubenswrapper[31411]: I0224 02:35:30.207898 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-679f75d775-s56hh"
Feb 24 02:35:30.219206 master-0 kubenswrapper[31411]: I0224 02:35:30.207889 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-679f75d775-s56hh" event={"ID":"35eba989-c9cb-4c28-9b21-5d149c4672eb","Type":"ContainerDied","Data":"8775c59455e8df11c1a004d926090027e1bea22941d3ad85778eb16ba1325dc6"}
Feb 24 02:35:30.219206 master-0 kubenswrapper[31411]: I0224 02:35:30.208145 31411 scope.go:117] "RemoveContainer" containerID="32799e292dbacee5155cb9e18194b42c0c77635b92d984fb9b837ec0b6d6cc0f"
Feb 24 02:35:30.219206 master-0 kubenswrapper[31411]: I0224 02:35:30.210494 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" event={"ID":"42530b69-b5be-4699-873f-51cf3161c255","Type":"ContainerStarted","Data":"3a947c896bab121a3fb3c5c6a8f0566d90ecf16668da27679d5113ef5e5d3efa"}
Feb 24 02:35:30.219206 master-0 kubenswrapper[31411]: I0224 02:35:30.216604 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5w4cf" event={"ID":"77be4a7f-0ff5-439e-9298-8e071291ba72","Type":"ContainerStarted","Data":"5a23aea118961118736984f4eeff16eb5ef51838a5605d8217e090b0bba8a3af"}
Feb 24 02:35:30.220602 master-0 kubenswrapper[31411]: I0224 02:35:30.220509 31411 generic.go:334] "Generic (PLEG): container finished" podID="7f8e94c7-6108-4435-a235-da04091a591a" containerID="69aa71de288013123845b360854976978070fda65562360e4ed9e625c1c13f11" exitCode=0
Feb 24 02:35:30.220834 master-0 kubenswrapper[31411]: I0224 02:35:30.220620 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79745f7855-j9vwf" event={"ID":"7f8e94c7-6108-4435-a235-da04091a591a","Type":"ContainerDied","Data":"69aa71de288013123845b360854976978070fda65562360e4ed9e625c1c13f11"}
Feb 24 02:35:30.223380 master-0 kubenswrapper[31411]: I0224 02:35:30.223306 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd15c92e-3d10-4ad1-8769-ff495fcf6f49","Type":"ContainerStarted","Data":"3fa35fc37e9161e19a63b6871d096cbd22b0831a4dac541bd61f0f8ffcfa9a21"}
Feb 24 02:35:30.385969 master-0 kubenswrapper[31411]: I0224 02:35:30.382994 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5w4cf" podStartSLOduration=3.382966593 podStartE2EDuration="3.382966593s" podCreationTimestamp="2026-02-24 02:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:35:30.373378774 +0000 UTC m=+873.590576640" watchObservedRunningTime="2026-02-24 02:35:30.382966593 +0000 UTC m=+873.600164439"
Feb 24 02:35:30.432480 master-0 kubenswrapper[31411]: I0224 02:35:30.432405 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-679f75d775-s56hh"]
Feb 24 02:35:30.447882 master-0 kubenswrapper[31411]: I0224 02:35:30.442752 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-679f75d775-s56hh"]
Feb 24 02:35:30.736308 master-0 kubenswrapper[31411]: I0224 02:35:30.736263 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79745f7855-j9vwf"
Feb 24 02:35:30.794284 master-0 kubenswrapper[31411]: I0224 02:35:30.794204 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-dns-svc\") pod \"7f8e94c7-6108-4435-a235-da04091a591a\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") "
Feb 24 02:35:30.794683 master-0 kubenswrapper[31411]: I0224 02:35:30.794442 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7k7x\" (UniqueName: \"kubernetes.io/projected/7f8e94c7-6108-4435-a235-da04091a591a-kube-api-access-k7k7x\") pod \"7f8e94c7-6108-4435-a235-da04091a591a\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") "
Feb 24 02:35:30.794683 master-0 kubenswrapper[31411]: I0224 02:35:30.794533 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-config\") pod \"7f8e94c7-6108-4435-a235-da04091a591a\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") "
Feb 24 02:35:30.794764 master-0 kubenswrapper[31411]: I0224 02:35:30.794690 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-sb\") pod \"7f8e94c7-6108-4435-a235-da04091a591a\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") "
Feb 24 02:35:30.794764 master-0 kubenswrapper[31411]: I0224 02:35:30.794745 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-nb\") pod \"7f8e94c7-6108-4435-a235-da04091a591a\" (UID: \"7f8e94c7-6108-4435-a235-da04091a591a\") "
Feb 24 02:35:30.801432 master-0 kubenswrapper[31411]: I0224 02:35:30.798001 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8e94c7-6108-4435-a235-da04091a591a-kube-api-access-k7k7x" (OuterVolumeSpecName: "kube-api-access-k7k7x") pod "7f8e94c7-6108-4435-a235-da04091a591a" (UID: "7f8e94c7-6108-4435-a235-da04091a591a"). InnerVolumeSpecName "kube-api-access-k7k7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:35:30.828023 master-0 kubenswrapper[31411]: I0224 02:35:30.827966 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f8e94c7-6108-4435-a235-da04091a591a" (UID: "7f8e94c7-6108-4435-a235-da04091a591a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:30.840445 master-0 kubenswrapper[31411]: I0224 02:35:30.840366 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f8e94c7-6108-4435-a235-da04091a591a" (UID: "7f8e94c7-6108-4435-a235-da04091a591a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:30.852794 master-0 kubenswrapper[31411]: I0224 02:35:30.852715 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-config" (OuterVolumeSpecName: "config") pod "7f8e94c7-6108-4435-a235-da04091a591a" (UID: "7f8e94c7-6108-4435-a235-da04091a591a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:30.869423 master-0 kubenswrapper[31411]: I0224 02:35:30.868889 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f8e94c7-6108-4435-a235-da04091a591a" (UID: "7f8e94c7-6108-4435-a235-da04091a591a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:30.898850 master-0 kubenswrapper[31411]: I0224 02:35:30.898793 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:30.898850 master-0 kubenswrapper[31411]: I0224 02:35:30.898842 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7k7x\" (UniqueName: \"kubernetes.io/projected/7f8e94c7-6108-4435-a235-da04091a591a-kube-api-access-k7k7x\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:30.898850 master-0 kubenswrapper[31411]: I0224 02:35:30.898858 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:30.898850 master-0 kubenswrapper[31411]: I0224 02:35:30.898868 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:30.898850 master-0 kubenswrapper[31411]: I0224 02:35:30.898878 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8e94c7-6108-4435-a235-da04091a591a-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:31.119831 master-0 kubenswrapper[31411]: I0224 02:35:31.118592 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35eba989-c9cb-4c28-9b21-5d149c4672eb" path="/var/lib/kubelet/pods/35eba989-c9cb-4c28-9b21-5d149c4672eb/volumes"
Feb 24 02:35:31.119831 master-0 kubenswrapper[31411]: I0224 02:35:31.119538 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 24 02:35:31.120931 master-0 kubenswrapper[31411]: E0224 02:35:31.120352 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8e94c7-6108-4435-a235-da04091a591a" containerName="init"
Feb 24 02:35:31.120931 master-0 kubenswrapper[31411]: I0224 02:35:31.120374 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8e94c7-6108-4435-a235-da04091a591a" containerName="init"
Feb 24 02:35:31.120931 master-0 kubenswrapper[31411]: E0224 02:35:31.120421 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35eba989-c9cb-4c28-9b21-5d149c4672eb" containerName="init"
Feb 24 02:35:31.120931 master-0 kubenswrapper[31411]: I0224 02:35:31.120428 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="35eba989-c9cb-4c28-9b21-5d149c4672eb" containerName="init"
Feb 24 02:35:31.120931 master-0 kubenswrapper[31411]: I0224 02:35:31.120720 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="35eba989-c9cb-4c28-9b21-5d149c4672eb" containerName="init"
Feb 24 02:35:31.120931 master-0 kubenswrapper[31411]: I0224 02:35:31.120741 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8e94c7-6108-4435-a235-da04091a591a" containerName="init"
Feb 24 02:35:31.129123 master-0 kubenswrapper[31411]: I0224 02:35:31.128240 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 24 02:35:31.133689 master-0 kubenswrapper[31411]: I0224 02:35:31.130875 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 24 02:35:31.133689 master-0 kubenswrapper[31411]: I0224 02:35:31.131379 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 24 02:35:31.136000 master-0 kubenswrapper[31411]: I0224 02:35:31.135959 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 24 02:35:31.143287 master-0 kubenswrapper[31411]: I0224 02:35:31.138191 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 24 02:35:31.205676 master-0 kubenswrapper[31411]: I0224 02:35:31.205614 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9c814b7c-b62b-4104-8139-8e6cd597d33f-lock\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.205799 master-0 kubenswrapper[31411]: I0224 02:35:31.205701 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bt5f\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-kube-api-access-8bt5f\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.205799 master-0 kubenswrapper[31411]: I0224 02:35:31.205781 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c814b7c-b62b-4104-8139-8e6cd597d33f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.205909 master-0 kubenswrapper[31411]: I0224 02:35:31.205854 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9c814b7c-b62b-4104-8139-8e6cd597d33f-cache\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.206091 master-0 kubenswrapper[31411]: I0224 02:35:31.206042 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e673ada-f93f-4478-865c-179323f1aba0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^03dd39aa-eff5-4c10-8ac6-00748703c90f\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.206144 master-0 kubenswrapper[31411]: I0224 02:35:31.206088 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.262947 master-0 kubenswrapper[31411]: I0224 02:35:31.262463 31411 generic.go:334] "Generic (PLEG): container finished" podID="42530b69-b5be-4699-873f-51cf3161c255" containerID="529e8c557b8008f4c2b0087c1c9534f3ded60d53f6160c927fd52e9aa98352ca" exitCode=0
Feb 24 02:35:31.263695 master-0 kubenswrapper[31411]: I0224 02:35:31.262763 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" event={"ID":"42530b69-b5be-4699-873f-51cf3161c255","Type":"ContainerDied","Data":"529e8c557b8008f4c2b0087c1c9534f3ded60d53f6160c927fd52e9aa98352ca"}
Feb 24 02:35:31.271426 master-0 kubenswrapper[31411]: I0224 02:35:31.271335 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79745f7855-j9vwf" event={"ID":"7f8e94c7-6108-4435-a235-da04091a591a","Type":"ContainerDied","Data":"988b99c7f97c238c86cbf105cf5628c91f32b0b01e74f0dbd36050f96db31790"}
Feb 24 02:35:31.271491 master-0 kubenswrapper[31411]: I0224 02:35:31.271464 31411 scope.go:117] "RemoveContainer" containerID="69aa71de288013123845b360854976978070fda65562360e4ed9e625c1c13f11"
Feb 24 02:35:31.271557 master-0 kubenswrapper[31411]: I0224 02:35:31.271380 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79745f7855-j9vwf"
Feb 24 02:35:31.308257 master-0 kubenswrapper[31411]: I0224 02:35:31.308030 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9c814b7c-b62b-4104-8139-8e6cd597d33f-lock\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.308257 master-0 kubenswrapper[31411]: I0224 02:35:31.308113 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bt5f\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-kube-api-access-8bt5f\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.308257 master-0 kubenswrapper[31411]: I0224 02:35:31.308164 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c814b7c-b62b-4104-8139-8e6cd597d33f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.308257 master-0 kubenswrapper[31411]: I0224 02:35:31.308222 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9c814b7c-b62b-4104-8139-8e6cd597d33f-cache\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.308737 master-0 kubenswrapper[31411]: I0224 02:35:31.308362 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e673ada-f93f-4478-865c-179323f1aba0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^03dd39aa-eff5-4c10-8ac6-00748703c90f\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.308737 master-0 kubenswrapper[31411]: I0224 02:35:31.308383 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.308827 master-0 kubenswrapper[31411]: E0224 02:35:31.308767 31411 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 24 02:35:31.308827 master-0 kubenswrapper[31411]: E0224 02:35:31.308816 31411 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 24 02:35:31.309621 master-0 kubenswrapper[31411]: E0224 02:35:31.309565 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift podName:9c814b7c-b62b-4104-8139-8e6cd597d33f nodeName:}" failed. No retries permitted until 2026-02-24 02:35:31.809512256 +0000 UTC m=+875.026710102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift") pod "swift-storage-0" (UID: "9c814b7c-b62b-4104-8139-8e6cd597d33f") : configmap "swift-ring-files" not found
Feb 24 02:35:31.310216 master-0 kubenswrapper[31411]: I0224 02:35:31.309879 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9c814b7c-b62b-4104-8139-8e6cd597d33f-lock\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.310216 master-0 kubenswrapper[31411]: I0224 02:35:31.310174 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9c814b7c-b62b-4104-8139-8e6cd597d33f-cache\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.316723 master-0 kubenswrapper[31411]: I0224 02:35:31.316162 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c814b7c-b62b-4104-8139-8e6cd597d33f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.323058 master-0 kubenswrapper[31411]: I0224 02:35:31.323009 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 02:35:31.323130 master-0 kubenswrapper[31411]: I0224 02:35:31.323105 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e673ada-f93f-4478-865c-179323f1aba0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^03dd39aa-eff5-4c10-8ac6-00748703c90f\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6c67cb9ac1cc799579d1da0a500b7b6a1ffeb8a09c92a03b1159153111f1908d/globalmount\"" pod="openstack/swift-storage-0"
Feb 24 02:35:31.333089 master-0 kubenswrapper[31411]: I0224 02:35:31.333037 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bt5f\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-kube-api-access-8bt5f\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.439887 master-0 kubenswrapper[31411]: E0224 02:35:31.439821 31411 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f8e94c7_6108_4435_a235_da04091a591a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f8e94c7_6108_4435_a235_da04091a591a.slice/crio-988b99c7f97c238c86cbf105cf5628c91f32b0b01e74f0dbd36050f96db31790\": RecentStats: unable to find data in memory cache]"
Feb 24 02:35:31.476978 master-0 kubenswrapper[31411]: I0224 02:35:31.476900 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79745f7855-j9vwf"]
Feb 24 02:35:31.486259 master-0 kubenswrapper[31411]: I0224 02:35:31.486190 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79745f7855-j9vwf"]
Feb 24 02:35:31.821832 master-0 kubenswrapper[31411]: I0224 02:35:31.821744 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0"
Feb 24 02:35:31.822065 master-0 kubenswrapper[31411]: E0224 02:35:31.822007 31411 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 24 02:35:31.822065 master-0 kubenswrapper[31411]: E0224 02:35:31.822061 31411 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 24 02:35:31.822163 master-0 kubenswrapper[31411]: E0224 02:35:31.822140 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift podName:9c814b7c-b62b-4104-8139-8e6cd597d33f nodeName:}" failed. No retries permitted until 2026-02-24 02:35:32.822112556 +0000 UTC m=+876.039310412 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift") pod "swift-storage-0" (UID: "9c814b7c-b62b-4104-8139-8e6cd597d33f") : configmap "swift-ring-files" not found
Feb 24 02:35:31.927929 master-0 kubenswrapper[31411]: I0224 02:35:31.927311 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gm5ph"]
Feb 24 02:35:31.929402 master-0 kubenswrapper[31411]: I0224 02:35:31.929371 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:31.940742 master-0 kubenswrapper[31411]: I0224 02:35:31.940005 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 24 02:35:31.940742 master-0 kubenswrapper[31411]: I0224 02:35:31.940052 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 24 02:35:31.940742 master-0 kubenswrapper[31411]: I0224 02:35:31.940266 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 24 02:35:31.979614 master-0 kubenswrapper[31411]: I0224 02:35:31.976634 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gm5ph"]
Feb 24 02:35:32.028848 master-0 kubenswrapper[31411]: I0224 02:35:32.028764 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-ring-data-devices\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.029107 master-0 kubenswrapper[31411]: I0224 02:35:32.028903 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b4xr\" (UniqueName: \"kubernetes.io/projected/c1211f9d-8d58-49d4-8892-61d252358fa6-kube-api-access-9b4xr\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.029107 master-0 kubenswrapper[31411]: I0224 02:35:32.028989 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-dispersionconf\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.029107 master-0 kubenswrapper[31411]: I0224 02:35:32.029018 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-scripts\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.029387 master-0 kubenswrapper[31411]: I0224 02:35:32.029192 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-swiftconf\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.029387 master-0 kubenswrapper[31411]: I0224 02:35:32.029324 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-combined-ca-bundle\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.029387 master-0 kubenswrapper[31411]: I0224 02:35:32.029352 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c1211f9d-8d58-49d4-8892-61d252358fa6-etc-swift\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.131632 master-0 kubenswrapper[31411]: I0224 02:35:32.131525 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b4xr\" (UniqueName: \"kubernetes.io/projected/c1211f9d-8d58-49d4-8892-61d252358fa6-kube-api-access-9b4xr\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.134431 master-0 kubenswrapper[31411]: I0224 02:35:32.131733 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-dispersionconf\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.134431 master-0 kubenswrapper[31411]: I0224 02:35:32.131975 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-scripts\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.134431 master-0 kubenswrapper[31411]: I0224 02:35:32.132211 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-swiftconf\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.134431 master-0 kubenswrapper[31411]: I0224 02:35:32.132401 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-combined-ca-bundle\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph"
Feb 24 02:35:32.134431 master-0 kubenswrapper[31411]: I0224 02:35:32.132445 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/c1211f9d-8d58-49d4-8892-61d252358fa6-etc-swift\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.134431 master-0 kubenswrapper[31411]: I0224 02:35:32.132996 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-scripts\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.134431 master-0 kubenswrapper[31411]: I0224 02:35:32.133164 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-ring-data-devices\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.134431 master-0 kubenswrapper[31411]: I0224 02:35:32.133300 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c1211f9d-8d58-49d4-8892-61d252358fa6-etc-swift\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.134431 master-0 kubenswrapper[31411]: I0224 02:35:32.134043 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-ring-data-devices\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.143686 master-0 kubenswrapper[31411]: I0224 02:35:32.136494 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-swiftconf\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.143686 master-0 kubenswrapper[31411]: I0224 02:35:32.138650 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-dispersionconf\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.143686 master-0 kubenswrapper[31411]: I0224 02:35:32.138763 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-combined-ca-bundle\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.161550 master-0 kubenswrapper[31411]: I0224 02:35:32.161476 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b4xr\" (UniqueName: \"kubernetes.io/projected/c1211f9d-8d58-49d4-8892-61d252358fa6-kube-api-access-9b4xr\") pod \"swift-ring-rebalance-gm5ph\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.306545 master-0 kubenswrapper[31411]: I0224 02:35:32.306438 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bd15c92e-3d10-4ad1-8769-ff495fcf6f49","Type":"ContainerStarted","Data":"fb72de183a97a3ad0294186901a133b95600d072aae96b1afe84fea1a8443219"} Feb 24 02:35:32.306545 master-0 kubenswrapper[31411]: I0224 02:35:32.306541 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"bd15c92e-3d10-4ad1-8769-ff495fcf6f49","Type":"ContainerStarted","Data":"222c28fb99219ec28499e676461f1ed7e467ad25d7b3b1e970b92303a41e8803"} Feb 24 02:35:32.307253 master-0 kubenswrapper[31411]: I0224 02:35:32.306698 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 24 02:35:32.310948 master-0 kubenswrapper[31411]: I0224 02:35:32.310888 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" event={"ID":"42530b69-b5be-4699-873f-51cf3161c255","Type":"ContainerStarted","Data":"b47802dc03ac933559f7cacd5354414ae6b343727c0f8b39b0f31fbc1f75c829"} Feb 24 02:35:32.311172 master-0 kubenswrapper[31411]: I0224 02:35:32.311107 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" Feb 24 02:35:32.318405 master-0 kubenswrapper[31411]: I0224 02:35:32.318337 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:32.354516 master-0 kubenswrapper[31411]: I0224 02:35:32.354417 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.056423761 podStartE2EDuration="4.354396357s" podCreationTimestamp="2026-02-24 02:35:28 +0000 UTC" firstStartedPulling="2026-02-24 02:35:29.85312982 +0000 UTC m=+873.070327666" lastFinishedPulling="2026-02-24 02:35:31.151102406 +0000 UTC m=+874.368300262" observedRunningTime="2026-02-24 02:35:32.350823377 +0000 UTC m=+875.568021223" watchObservedRunningTime="2026-02-24 02:35:32.354396357 +0000 UTC m=+875.571594203" Feb 24 02:35:32.392242 master-0 kubenswrapper[31411]: I0224 02:35:32.392070 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" podStartSLOduration=4.392027112 podStartE2EDuration="4.392027112s" podCreationTimestamp="2026-02-24 02:35:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:35:32.388066411 +0000 UTC m=+875.605264277" watchObservedRunningTime="2026-02-24 02:35:32.392027112 +0000 UTC m=+875.609224958" Feb 24 02:35:32.781602 master-0 kubenswrapper[31411]: I0224 02:35:32.778064 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e673ada-f93f-4478-865c-179323f1aba0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^03dd39aa-eff5-4c10-8ac6-00748703c90f\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0" Feb 24 02:35:32.867265 master-0 kubenswrapper[31411]: I0224 02:35:32.867118 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0" Feb 24 02:35:32.867514 master-0 kubenswrapper[31411]: E0224 02:35:32.867449 31411 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 02:35:32.867514 master-0 kubenswrapper[31411]: E0224 02:35:32.867471 31411 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 02:35:32.867609 master-0 kubenswrapper[31411]: E0224 02:35:32.867556 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift podName:9c814b7c-b62b-4104-8139-8e6cd597d33f nodeName:}" failed. No retries permitted until 2026-02-24 02:35:34.867538432 +0000 UTC m=+878.084736278 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift") pod "swift-storage-0" (UID: "9c814b7c-b62b-4104-8139-8e6cd597d33f") : configmap "swift-ring-files" not found Feb 24 02:35:32.908005 master-0 kubenswrapper[31411]: I0224 02:35:32.907946 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gm5ph"] Feb 24 02:35:32.928311 master-0 kubenswrapper[31411]: W0224 02:35:32.928189 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1211f9d_8d58_49d4_8892_61d252358fa6.slice/crio-336f66be71b2cb9ce138900498ff5a8530143f67aa178ed495ecb50cdc7051c5 WatchSource:0}: Error finding container 336f66be71b2cb9ce138900498ff5a8530143f67aa178ed495ecb50cdc7051c5: Status 404 returned error can't find the container with id 336f66be71b2cb9ce138900498ff5a8530143f67aa178ed495ecb50cdc7051c5 Feb 24 02:35:33.105139 master-0 kubenswrapper[31411]: I0224 02:35:33.105055 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8e94c7-6108-4435-a235-da04091a591a" path="/var/lib/kubelet/pods/7f8e94c7-6108-4435-a235-da04091a591a/volumes" Feb 24 02:35:33.345448 master-0 kubenswrapper[31411]: I0224 02:35:33.345274 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gm5ph" event={"ID":"c1211f9d-8d58-49d4-8892-61d252358fa6","Type":"ContainerStarted","Data":"336f66be71b2cb9ce138900498ff5a8530143f67aa178ed495ecb50cdc7051c5"} Feb 24 02:35:34.939602 master-0 kubenswrapper[31411]: I0224 02:35:34.939161 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0" Feb 24 02:35:34.939602 master-0 kubenswrapper[31411]: E0224 
02:35:34.939390 31411 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 02:35:34.939602 master-0 kubenswrapper[31411]: E0224 02:35:34.939408 31411 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 02:35:34.939602 master-0 kubenswrapper[31411]: E0224 02:35:34.939459 31411 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift podName:9c814b7c-b62b-4104-8139-8e6cd597d33f nodeName:}" failed. No retries permitted until 2026-02-24 02:35:38.939443683 +0000 UTC m=+882.156641529 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift") pod "swift-storage-0" (UID: "9c814b7c-b62b-4104-8139-8e6cd597d33f") : configmap "swift-ring-files" not found Feb 24 02:35:37.307199 master-0 kubenswrapper[31411]: I0224 02:35:37.307123 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 24 02:35:37.464071 master-0 kubenswrapper[31411]: I0224 02:35:37.464009 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 24 02:35:37.634298 master-0 kubenswrapper[31411]: I0224 02:35:37.634124 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 02:35:37.634298 master-0 kubenswrapper[31411]: I0224 02:35:37.634197 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 02:35:37.738824 master-0 kubenswrapper[31411]: I0224 02:35:37.738380 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 24 02:35:38.442144 master-0 kubenswrapper[31411]: I0224 
02:35:38.442063 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gm5ph" event={"ID":"c1211f9d-8d58-49d4-8892-61d252358fa6","Type":"ContainerStarted","Data":"2f206227059baaf98b504889171230c0bcae9307cabecdd8be899abd8c1163d8"} Feb 24 02:35:38.476453 master-0 kubenswrapper[31411]: I0224 02:35:38.476174 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gm5ph" podStartSLOduration=3.352906811 podStartE2EDuration="7.476074314s" podCreationTimestamp="2026-02-24 02:35:31 +0000 UTC" firstStartedPulling="2026-02-24 02:35:32.934190221 +0000 UTC m=+876.151388057" lastFinishedPulling="2026-02-24 02:35:37.057357704 +0000 UTC m=+880.274555560" observedRunningTime="2026-02-24 02:35:38.474443359 +0000 UTC m=+881.691641275" watchObservedRunningTime="2026-02-24 02:35:38.476074314 +0000 UTC m=+881.693272190" Feb 24 02:35:38.592374 master-0 kubenswrapper[31411]: I0224 02:35:38.591340 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 24 02:35:38.975046 master-0 kubenswrapper[31411]: I0224 02:35:38.974934 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0" Feb 24 02:35:38.975431 master-0 kubenswrapper[31411]: E0224 02:35:38.975356 31411 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 02:35:38.975431 master-0 kubenswrapper[31411]: E0224 02:35:38.975390 31411 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 02:35:38.975631 master-0 kubenswrapper[31411]: E0224 02:35:38.975483 31411 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift podName:9c814b7c-b62b-4104-8139-8e6cd597d33f nodeName:}" failed. No retries permitted until 2026-02-24 02:35:46.975451763 +0000 UTC m=+890.192649659 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift") pod "swift-storage-0" (UID: "9c814b7c-b62b-4104-8139-8e6cd597d33f") : configmap "swift-ring-files" not found Feb 24 02:35:39.469040 master-0 kubenswrapper[31411]: I0224 02:35:39.468946 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" Feb 24 02:35:39.565374 master-0 kubenswrapper[31411]: I0224 02:35:39.562708 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-jf69p"] Feb 24 02:35:39.565374 master-0 kubenswrapper[31411]: I0224 02:35:39.563120 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" podUID="bef074ce-14e6-4258-8172-bd2e640ae24b" containerName="dnsmasq-dns" containerID="cri-o://875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82" gracePeriod=10 Feb 24 02:35:40.262644 master-0 kubenswrapper[31411]: I0224 02:35:40.262487 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" Feb 24 02:35:40.426788 master-0 kubenswrapper[31411]: I0224 02:35:40.426694 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-config\") pod \"bef074ce-14e6-4258-8172-bd2e640ae24b\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " Feb 24 02:35:40.426788 master-0 kubenswrapper[31411]: I0224 02:35:40.426789 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-dns-svc\") pod \"bef074ce-14e6-4258-8172-bd2e640ae24b\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " Feb 24 02:35:40.427212 master-0 kubenswrapper[31411]: I0224 02:35:40.426998 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxfhw\" (UniqueName: \"kubernetes.io/projected/bef074ce-14e6-4258-8172-bd2e640ae24b-kube-api-access-fxfhw\") pod \"bef074ce-14e6-4258-8172-bd2e640ae24b\" (UID: \"bef074ce-14e6-4258-8172-bd2e640ae24b\") " Feb 24 02:35:40.432909 master-0 kubenswrapper[31411]: I0224 02:35:40.432831 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef074ce-14e6-4258-8172-bd2e640ae24b-kube-api-access-fxfhw" (OuterVolumeSpecName: "kube-api-access-fxfhw") pod "bef074ce-14e6-4258-8172-bd2e640ae24b" (UID: "bef074ce-14e6-4258-8172-bd2e640ae24b"). InnerVolumeSpecName "kube-api-access-fxfhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:35:40.470434 master-0 kubenswrapper[31411]: I0224 02:35:40.470356 31411 generic.go:334] "Generic (PLEG): container finished" podID="bef074ce-14e6-4258-8172-bd2e640ae24b" containerID="875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82" exitCode=0 Feb 24 02:35:40.470434 master-0 kubenswrapper[31411]: I0224 02:35:40.470434 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" event={"ID":"bef074ce-14e6-4258-8172-bd2e640ae24b","Type":"ContainerDied","Data":"875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82"} Feb 24 02:35:40.471382 master-0 kubenswrapper[31411]: I0224 02:35:40.470503 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" event={"ID":"bef074ce-14e6-4258-8172-bd2e640ae24b","Type":"ContainerDied","Data":"41fa347412359539880b9ddb0e1e83dfe92ff1f425c7d7ad7e7f1df699b7f38d"} Feb 24 02:35:40.471382 master-0 kubenswrapper[31411]: I0224 02:35:40.470540 31411 scope.go:117] "RemoveContainer" containerID="875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82" Feb 24 02:35:40.471382 master-0 kubenswrapper[31411]: I0224 02:35:40.470441 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-jf69p" Feb 24 02:35:40.517519 master-0 kubenswrapper[31411]: I0224 02:35:40.517424 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bef074ce-14e6-4258-8172-bd2e640ae24b" (UID: "bef074ce-14e6-4258-8172-bd2e640ae24b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:40.522060 master-0 kubenswrapper[31411]: I0224 02:35:40.521992 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-config" (OuterVolumeSpecName: "config") pod "bef074ce-14e6-4258-8172-bd2e640ae24b" (UID: "bef074ce-14e6-4258-8172-bd2e640ae24b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:40.531326 master-0 kubenswrapper[31411]: I0224 02:35:40.531253 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:40.531326 master-0 kubenswrapper[31411]: I0224 02:35:40.531290 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bef074ce-14e6-4258-8172-bd2e640ae24b-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:40.531326 master-0 kubenswrapper[31411]: I0224 02:35:40.531303 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxfhw\" (UniqueName: \"kubernetes.io/projected/bef074ce-14e6-4258-8172-bd2e640ae24b-kube-api-access-fxfhw\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:40.552345 master-0 kubenswrapper[31411]: I0224 02:35:40.552267 31411 scope.go:117] "RemoveContainer" containerID="7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a" Feb 24 02:35:40.590740 master-0 kubenswrapper[31411]: I0224 02:35:40.590694 31411 scope.go:117] "RemoveContainer" containerID="875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82" Feb 24 02:35:40.591665 master-0 kubenswrapper[31411]: E0224 02:35:40.591614 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82\": 
container with ID starting with 875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82 not found: ID does not exist" containerID="875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82" Feb 24 02:35:40.591911 master-0 kubenswrapper[31411]: I0224 02:35:40.591850 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82"} err="failed to get container status \"875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82\": rpc error: code = NotFound desc = could not find container \"875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82\": container with ID starting with 875a66b794e9377cb7c1be23dec6128c695853cdb2b4a5ea444834d6b524be82 not found: ID does not exist" Feb 24 02:35:40.592071 master-0 kubenswrapper[31411]: I0224 02:35:40.592046 31411 scope.go:117] "RemoveContainer" containerID="7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a" Feb 24 02:35:40.592891 master-0 kubenswrapper[31411]: E0224 02:35:40.592832 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a\": container with ID starting with 7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a not found: ID does not exist" containerID="7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a" Feb 24 02:35:40.593018 master-0 kubenswrapper[31411]: I0224 02:35:40.592903 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a"} err="failed to get container status \"7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a\": rpc error: code = NotFound desc = could not find container \"7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a\": container with ID starting with 
7bc8db99f303a7551b80ce8a10c7a53443c2a3025ea38623526f0fa9e59b510a not found: ID does not exist" Feb 24 02:35:40.815358 master-0 kubenswrapper[31411]: I0224 02:35:40.815260 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-jf69p"] Feb 24 02:35:40.845440 master-0 kubenswrapper[31411]: I0224 02:35:40.844021 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-jf69p"] Feb 24 02:35:41.121231 master-0 kubenswrapper[31411]: I0224 02:35:41.121038 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef074ce-14e6-4258-8172-bd2e640ae24b" path="/var/lib/kubelet/pods/bef074ce-14e6-4258-8172-bd2e640ae24b/volumes" Feb 24 02:35:41.743016 master-0 kubenswrapper[31411]: I0224 02:35:41.742927 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jl696"] Feb 24 02:35:41.743855 master-0 kubenswrapper[31411]: E0224 02:35:41.743785 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef074ce-14e6-4258-8172-bd2e640ae24b" containerName="init" Feb 24 02:35:41.743855 master-0 kubenswrapper[31411]: I0224 02:35:41.743808 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef074ce-14e6-4258-8172-bd2e640ae24b" containerName="init" Feb 24 02:35:41.743855 master-0 kubenswrapper[31411]: E0224 02:35:41.743837 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bef074ce-14e6-4258-8172-bd2e640ae24b" containerName="dnsmasq-dns" Feb 24 02:35:41.743855 master-0 kubenswrapper[31411]: I0224 02:35:41.743847 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="bef074ce-14e6-4258-8172-bd2e640ae24b" containerName="dnsmasq-dns" Feb 24 02:35:41.744286 master-0 kubenswrapper[31411]: I0224 02:35:41.744248 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="bef074ce-14e6-4258-8172-bd2e640ae24b" containerName="dnsmasq-dns" Feb 24 02:35:41.745405 master-0 kubenswrapper[31411]: I0224 02:35:41.745361 31411 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jl696" Feb 24 02:35:41.762848 master-0 kubenswrapper[31411]: I0224 02:35:41.762769 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e923-account-create-update-dswn2"] Feb 24 02:35:41.765112 master-0 kubenswrapper[31411]: I0224 02:35:41.765074 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:41.767412 master-0 kubenswrapper[31411]: I0224 02:35:41.767373 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 24 02:35:41.879697 master-0 kubenswrapper[31411]: I0224 02:35:41.879612 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7s4s\" (UniqueName: \"kubernetes.io/projected/fc6984b0-e97a-4750-b89f-290aa2cc36b9-kube-api-access-c7s4s\") pod \"glance-db-create-jl696\" (UID: \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\") " pod="openstack/glance-db-create-jl696" Feb 24 02:35:41.880048 master-0 kubenswrapper[31411]: I0224 02:35:41.879742 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9158aad4-fb9b-4900-99cd-c78da20e920d-operator-scripts\") pod \"glance-e923-account-create-update-dswn2\" (UID: \"9158aad4-fb9b-4900-99cd-c78da20e920d\") " pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:41.881296 master-0 kubenswrapper[31411]: I0224 02:35:41.881214 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhjnn\" (UniqueName: \"kubernetes.io/projected/9158aad4-fb9b-4900-99cd-c78da20e920d-kube-api-access-zhjnn\") pod \"glance-e923-account-create-update-dswn2\" (UID: \"9158aad4-fb9b-4900-99cd-c78da20e920d\") " 
pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:41.881707 master-0 kubenswrapper[31411]: I0224 02:35:41.881653 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6984b0-e97a-4750-b89f-290aa2cc36b9-operator-scripts\") pod \"glance-db-create-jl696\" (UID: \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\") " pod="openstack/glance-db-create-jl696" Feb 24 02:35:41.914405 master-0 kubenswrapper[31411]: I0224 02:35:41.914305 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jl696"] Feb 24 02:35:41.926476 master-0 kubenswrapper[31411]: I0224 02:35:41.926318 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e923-account-create-update-dswn2"] Feb 24 02:35:41.984845 master-0 kubenswrapper[31411]: I0224 02:35:41.984739 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6984b0-e97a-4750-b89f-290aa2cc36b9-operator-scripts\") pod \"glance-db-create-jl696\" (UID: \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\") " pod="openstack/glance-db-create-jl696" Feb 24 02:35:41.985140 master-0 kubenswrapper[31411]: I0224 02:35:41.985005 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7s4s\" (UniqueName: \"kubernetes.io/projected/fc6984b0-e97a-4750-b89f-290aa2cc36b9-kube-api-access-c7s4s\") pod \"glance-db-create-jl696\" (UID: \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\") " pod="openstack/glance-db-create-jl696" Feb 24 02:35:41.985140 master-0 kubenswrapper[31411]: I0224 02:35:41.985072 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9158aad4-fb9b-4900-99cd-c78da20e920d-operator-scripts\") pod \"glance-e923-account-create-update-dswn2\" (UID: 
\"9158aad4-fb9b-4900-99cd-c78da20e920d\") " pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:41.985278 master-0 kubenswrapper[31411]: I0224 02:35:41.985164 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhjnn\" (UniqueName: \"kubernetes.io/projected/9158aad4-fb9b-4900-99cd-c78da20e920d-kube-api-access-zhjnn\") pod \"glance-e923-account-create-update-dswn2\" (UID: \"9158aad4-fb9b-4900-99cd-c78da20e920d\") " pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:41.986543 master-0 kubenswrapper[31411]: I0224 02:35:41.986456 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6984b0-e97a-4750-b89f-290aa2cc36b9-operator-scripts\") pod \"glance-db-create-jl696\" (UID: \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\") " pod="openstack/glance-db-create-jl696" Feb 24 02:35:41.988447 master-0 kubenswrapper[31411]: I0224 02:35:41.986744 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9158aad4-fb9b-4900-99cd-c78da20e920d-operator-scripts\") pod \"glance-e923-account-create-update-dswn2\" (UID: \"9158aad4-fb9b-4900-99cd-c78da20e920d\") " pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:42.196919 master-0 kubenswrapper[31411]: I0224 02:35:42.196750 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7s4s\" (UniqueName: \"kubernetes.io/projected/fc6984b0-e97a-4750-b89f-290aa2cc36b9-kube-api-access-c7s4s\") pod \"glance-db-create-jl696\" (UID: \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\") " pod="openstack/glance-db-create-jl696" Feb 24 02:35:42.200095 master-0 kubenswrapper[31411]: I0224 02:35:42.200036 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhjnn\" (UniqueName: 
\"kubernetes.io/projected/9158aad4-fb9b-4900-99cd-c78da20e920d-kube-api-access-zhjnn\") pod \"glance-e923-account-create-update-dswn2\" (UID: \"9158aad4-fb9b-4900-99cd-c78da20e920d\") " pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:42.383287 master-0 kubenswrapper[31411]: I0224 02:35:42.382073 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jl696" Feb 24 02:35:42.394423 master-0 kubenswrapper[31411]: I0224 02:35:42.394353 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:43.494210 master-0 kubenswrapper[31411]: W0224 02:35:43.494161 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9158aad4_fb9b_4900_99cd_c78da20e920d.slice/crio-6a7dd5a895eab2bd05c3fe54684ac0a4c68cfe819062734726ac10bc0508db5c WatchSource:0}: Error finding container 6a7dd5a895eab2bd05c3fe54684ac0a4c68cfe819062734726ac10bc0508db5c: Status 404 returned error can't find the container with id 6a7dd5a895eab2bd05c3fe54684ac0a4c68cfe819062734726ac10bc0508db5c Feb 24 02:35:43.499813 master-0 kubenswrapper[31411]: W0224 02:35:43.499766 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc6984b0_e97a_4750_b89f_290aa2cc36b9.slice/crio-c292fd1b94016dfce78375ce770e486333ff842db7970d1a89d2d925f4820980 WatchSource:0}: Error finding container c292fd1b94016dfce78375ce770e486333ff842db7970d1a89d2d925f4820980: Status 404 returned error can't find the container with id c292fd1b94016dfce78375ce770e486333ff842db7970d1a89d2d925f4820980 Feb 24 02:35:43.503254 master-0 kubenswrapper[31411]: I0224 02:35:43.503220 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jl696"] Feb 24 02:35:43.521240 master-0 kubenswrapper[31411]: I0224 
02:35:43.519932 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e923-account-create-update-dswn2"] Feb 24 02:35:43.545965 master-0 kubenswrapper[31411]: I0224 02:35:43.545903 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e923-account-create-update-dswn2" event={"ID":"9158aad4-fb9b-4900-99cd-c78da20e920d","Type":"ContainerStarted","Data":"6a7dd5a895eab2bd05c3fe54684ac0a4c68cfe819062734726ac10bc0508db5c"} Feb 24 02:35:43.548256 master-0 kubenswrapper[31411]: I0224 02:35:43.548204 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jl696" event={"ID":"fc6984b0-e97a-4750-b89f-290aa2cc36b9","Type":"ContainerStarted","Data":"c292fd1b94016dfce78375ce770e486333ff842db7970d1a89d2d925f4820980"} Feb 24 02:35:44.287872 master-0 kubenswrapper[31411]: I0224 02:35:44.287791 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pxfms"] Feb 24 02:35:44.289611 master-0 kubenswrapper[31411]: I0224 02:35:44.289476 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:44.294210 master-0 kubenswrapper[31411]: I0224 02:35:44.294161 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 24 02:35:44.303984 master-0 kubenswrapper[31411]: I0224 02:35:44.303912 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pxfms"] Feb 24 02:35:44.369770 master-0 kubenswrapper[31411]: I0224 02:35:44.369548 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-operator-scripts\") pod \"root-account-create-update-pxfms\" (UID: \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\") " pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:44.370296 master-0 kubenswrapper[31411]: I0224 02:35:44.370273 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlxc8\" (UniqueName: \"kubernetes.io/projected/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-kube-api-access-jlxc8\") pod \"root-account-create-update-pxfms\" (UID: \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\") " pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:44.474383 master-0 kubenswrapper[31411]: I0224 02:35:44.474315 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlxc8\" (UniqueName: \"kubernetes.io/projected/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-kube-api-access-jlxc8\") pod \"root-account-create-update-pxfms\" (UID: \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\") " pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:44.474613 master-0 kubenswrapper[31411]: I0224 02:35:44.474553 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-operator-scripts\") pod \"root-account-create-update-pxfms\" (UID: \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\") " pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:44.475850 master-0 kubenswrapper[31411]: I0224 02:35:44.475713 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-operator-scripts\") pod \"root-account-create-update-pxfms\" (UID: \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\") " pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:44.492948 master-0 kubenswrapper[31411]: I0224 02:35:44.492911 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlxc8\" (UniqueName: \"kubernetes.io/projected/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-kube-api-access-jlxc8\") pod \"root-account-create-update-pxfms\" (UID: \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\") " pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:44.573544 master-0 kubenswrapper[31411]: I0224 02:35:44.573431 31411 generic.go:334] "Generic (PLEG): container finished" podID="9158aad4-fb9b-4900-99cd-c78da20e920d" containerID="fa0b6426fb5825820f55aba01104c816b6b323615f887f32756604f2dbc2e66b" exitCode=0 Feb 24 02:35:44.574710 master-0 kubenswrapper[31411]: I0224 02:35:44.573628 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e923-account-create-update-dswn2" event={"ID":"9158aad4-fb9b-4900-99cd-c78da20e920d","Type":"ContainerDied","Data":"fa0b6426fb5825820f55aba01104c816b6b323615f887f32756604f2dbc2e66b"} Feb 24 02:35:44.577152 master-0 kubenswrapper[31411]: I0224 02:35:44.577078 31411 generic.go:334] "Generic (PLEG): container finished" podID="c1211f9d-8d58-49d4-8892-61d252358fa6" containerID="2f206227059baaf98b504889171230c0bcae9307cabecdd8be899abd8c1163d8" exitCode=0 Feb 24 02:35:44.577341 master-0 kubenswrapper[31411]: I0224 02:35:44.577205 
31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gm5ph" event={"ID":"c1211f9d-8d58-49d4-8892-61d252358fa6","Type":"ContainerDied","Data":"2f206227059baaf98b504889171230c0bcae9307cabecdd8be899abd8c1163d8"} Feb 24 02:35:44.591433 master-0 kubenswrapper[31411]: I0224 02:35:44.591300 31411 generic.go:334] "Generic (PLEG): container finished" podID="fc6984b0-e97a-4750-b89f-290aa2cc36b9" containerID="74de2ff9ddd0ff5fce54b3b6888b341dd50d253ae5a806a1d255df1f158482be" exitCode=0 Feb 24 02:35:44.591433 master-0 kubenswrapper[31411]: I0224 02:35:44.591417 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jl696" event={"ID":"fc6984b0-e97a-4750-b89f-290aa2cc36b9","Type":"ContainerDied","Data":"74de2ff9ddd0ff5fce54b3b6888b341dd50d253ae5a806a1d255df1f158482be"} Feb 24 02:35:44.648477 master-0 kubenswrapper[31411]: I0224 02:35:44.648272 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:45.296710 master-0 kubenswrapper[31411]: W0224 02:35:45.295734 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc6a4062_0e87_4d7e_bb7d_8e3a127ea5e7.slice/crio-ba88d794891bb16880291a49324f6b1f732396b040d4c24d970912b975916aaa WatchSource:0}: Error finding container ba88d794891bb16880291a49324f6b1f732396b040d4c24d970912b975916aaa: Status 404 returned error can't find the container with id ba88d794891bb16880291a49324f6b1f732396b040d4c24d970912b975916aaa Feb 24 02:35:45.297731 master-0 kubenswrapper[31411]: I0224 02:35:45.297615 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pxfms"] Feb 24 02:35:45.610127 master-0 kubenswrapper[31411]: I0224 02:35:45.610033 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pxfms" 
event={"ID":"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7","Type":"ContainerStarted","Data":"ba88d794891bb16880291a49324f6b1f732396b040d4c24d970912b975916aaa"} Feb 24 02:35:45.652432 master-0 kubenswrapper[31411]: I0224 02:35:45.652298 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-pxfms" podStartSLOduration=1.652263392 podStartE2EDuration="1.652263392s" podCreationTimestamp="2026-02-24 02:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:35:45.644100383 +0000 UTC m=+888.861298309" watchObservedRunningTime="2026-02-24 02:35:45.652263392 +0000 UTC m=+888.869461248" Feb 24 02:35:46.392643 master-0 kubenswrapper[31411]: I0224 02:35:46.391952 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:46.467700 master-0 kubenswrapper[31411]: I0224 02:35:46.465945 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-scripts\") pod \"c1211f9d-8d58-49d4-8892-61d252358fa6\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " Feb 24 02:35:46.467700 master-0 kubenswrapper[31411]: I0224 02:35:46.466048 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b4xr\" (UniqueName: \"kubernetes.io/projected/c1211f9d-8d58-49d4-8892-61d252358fa6-kube-api-access-9b4xr\") pod \"c1211f9d-8d58-49d4-8892-61d252358fa6\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " Feb 24 02:35:46.467700 master-0 kubenswrapper[31411]: I0224 02:35:46.466094 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-ring-data-devices\") pod \"c1211f9d-8d58-49d4-8892-61d252358fa6\" 
(UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " Feb 24 02:35:46.467700 master-0 kubenswrapper[31411]: I0224 02:35:46.466141 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-swiftconf\") pod \"c1211f9d-8d58-49d4-8892-61d252358fa6\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " Feb 24 02:35:46.467700 master-0 kubenswrapper[31411]: I0224 02:35:46.466194 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-dispersionconf\") pod \"c1211f9d-8d58-49d4-8892-61d252358fa6\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " Feb 24 02:35:46.467700 master-0 kubenswrapper[31411]: I0224 02:35:46.466240 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c1211f9d-8d58-49d4-8892-61d252358fa6-etc-swift\") pod \"c1211f9d-8d58-49d4-8892-61d252358fa6\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " Feb 24 02:35:46.467700 master-0 kubenswrapper[31411]: I0224 02:35:46.466269 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-combined-ca-bundle\") pod \"c1211f9d-8d58-49d4-8892-61d252358fa6\" (UID: \"c1211f9d-8d58-49d4-8892-61d252358fa6\") " Feb 24 02:35:46.471087 master-0 kubenswrapper[31411]: I0224 02:35:46.471017 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c1211f9d-8d58-49d4-8892-61d252358fa6" (UID: "c1211f9d-8d58-49d4-8892-61d252358fa6"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:46.471150 master-0 kubenswrapper[31411]: I0224 02:35:46.471101 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1211f9d-8d58-49d4-8892-61d252358fa6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c1211f9d-8d58-49d4-8892-61d252358fa6" (UID: "c1211f9d-8d58-49d4-8892-61d252358fa6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:35:46.472502 master-0 kubenswrapper[31411]: I0224 02:35:46.472436 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1211f9d-8d58-49d4-8892-61d252358fa6-kube-api-access-9b4xr" (OuterVolumeSpecName: "kube-api-access-9b4xr") pod "c1211f9d-8d58-49d4-8892-61d252358fa6" (UID: "c1211f9d-8d58-49d4-8892-61d252358fa6"). InnerVolumeSpecName "kube-api-access-9b4xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:35:46.480451 master-0 kubenswrapper[31411]: I0224 02:35:46.480402 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c1211f9d-8d58-49d4-8892-61d252358fa6" (UID: "c1211f9d-8d58-49d4-8892-61d252358fa6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:35:46.503014 master-0 kubenswrapper[31411]: I0224 02:35:46.502909 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1211f9d-8d58-49d4-8892-61d252358fa6" (UID: "c1211f9d-8d58-49d4-8892-61d252358fa6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:35:46.512246 master-0 kubenswrapper[31411]: I0224 02:35:46.512180 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c1211f9d-8d58-49d4-8892-61d252358fa6" (UID: "c1211f9d-8d58-49d4-8892-61d252358fa6"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:35:46.520364 master-0 kubenswrapper[31411]: I0224 02:35:46.520321 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-scripts" (OuterVolumeSpecName: "scripts") pod "c1211f9d-8d58-49d4-8892-61d252358fa6" (UID: "c1211f9d-8d58-49d4-8892-61d252358fa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:46.565324 master-0 kubenswrapper[31411]: I0224 02:35:46.565207 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jl696" Feb 24 02:35:46.567991 master-0 kubenswrapper[31411]: I0224 02:35:46.567923 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6984b0-e97a-4750-b89f-290aa2cc36b9-operator-scripts\") pod \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\" (UID: \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\") " Feb 24 02:35:46.568598 master-0 kubenswrapper[31411]: I0224 02:35:46.568454 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7s4s\" (UniqueName: \"kubernetes.io/projected/fc6984b0-e97a-4750-b89f-290aa2cc36b9-kube-api-access-c7s4s\") pod \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\" (UID: \"fc6984b0-e97a-4750-b89f-290aa2cc36b9\") " Feb 24 02:35:46.568598 master-0 kubenswrapper[31411]: I0224 02:35:46.568530 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc6984b0-e97a-4750-b89f-290aa2cc36b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc6984b0-e97a-4750-b89f-290aa2cc36b9" (UID: "fc6984b0-e97a-4750-b89f-290aa2cc36b9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:46.569083 master-0 kubenswrapper[31411]: I0224 02:35:46.569052 31411 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c1211f9d-8d58-49d4-8892-61d252358fa6-etc-swift\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.569083 master-0 kubenswrapper[31411]: I0224 02:35:46.569079 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.569192 master-0 kubenswrapper[31411]: I0224 02:35:46.569090 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.569192 master-0 kubenswrapper[31411]: I0224 02:35:46.569102 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b4xr\" (UniqueName: \"kubernetes.io/projected/c1211f9d-8d58-49d4-8892-61d252358fa6-kube-api-access-9b4xr\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.569192 master-0 kubenswrapper[31411]: I0224 02:35:46.569114 31411 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c1211f9d-8d58-49d4-8892-61d252358fa6-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.569192 master-0 kubenswrapper[31411]: I0224 02:35:46.569124 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6984b0-e97a-4750-b89f-290aa2cc36b9-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.569192 master-0 kubenswrapper[31411]: I0224 02:35:46.569134 31411 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-swiftconf\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.569192 master-0 kubenswrapper[31411]: I0224 02:35:46.569142 31411 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c1211f9d-8d58-49d4-8892-61d252358fa6-dispersionconf\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.571920 master-0 kubenswrapper[31411]: I0224 02:35:46.571877 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc6984b0-e97a-4750-b89f-290aa2cc36b9-kube-api-access-c7s4s" (OuterVolumeSpecName: "kube-api-access-c7s4s") pod "fc6984b0-e97a-4750-b89f-290aa2cc36b9" (UID: "fc6984b0-e97a-4750-b89f-290aa2cc36b9"). InnerVolumeSpecName "kube-api-access-c7s4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:35:46.574818 master-0 kubenswrapper[31411]: I0224 02:35:46.574796 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:46.648664 master-0 kubenswrapper[31411]: I0224 02:35:46.648468 31411 generic.go:334] "Generic (PLEG): container finished" podID="bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7" containerID="c20e22b587026297ecbfb9bb4a70af1be08b2eb8a914ba2123643c2845e06777" exitCode=0 Feb 24 02:35:46.648664 master-0 kubenswrapper[31411]: I0224 02:35:46.648653 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pxfms" event={"ID":"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7","Type":"ContainerDied","Data":"c20e22b587026297ecbfb9bb4a70af1be08b2eb8a914ba2123643c2845e06777"} Feb 24 02:35:46.661208 master-0 kubenswrapper[31411]: I0224 02:35:46.653166 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gm5ph" event={"ID":"c1211f9d-8d58-49d4-8892-61d252358fa6","Type":"ContainerDied","Data":"336f66be71b2cb9ce138900498ff5a8530143f67aa178ed495ecb50cdc7051c5"} Feb 24 02:35:46.661208 master-0 kubenswrapper[31411]: I0224 02:35:46.653251 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="336f66be71b2cb9ce138900498ff5a8530143f67aa178ed495ecb50cdc7051c5" Feb 24 02:35:46.661208 master-0 kubenswrapper[31411]: I0224 02:35:46.653393 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gm5ph" Feb 24 02:35:46.661208 master-0 kubenswrapper[31411]: I0224 02:35:46.656420 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jl696" event={"ID":"fc6984b0-e97a-4750-b89f-290aa2cc36b9","Type":"ContainerDied","Data":"c292fd1b94016dfce78375ce770e486333ff842db7970d1a89d2d925f4820980"} Feb 24 02:35:46.661208 master-0 kubenswrapper[31411]: I0224 02:35:46.656465 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jl696" Feb 24 02:35:46.661208 master-0 kubenswrapper[31411]: I0224 02:35:46.656473 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c292fd1b94016dfce78375ce770e486333ff842db7970d1a89d2d925f4820980" Feb 24 02:35:46.661208 master-0 kubenswrapper[31411]: I0224 02:35:46.659139 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e923-account-create-update-dswn2" event={"ID":"9158aad4-fb9b-4900-99cd-c78da20e920d","Type":"ContainerDied","Data":"6a7dd5a895eab2bd05c3fe54684ac0a4c68cfe819062734726ac10bc0508db5c"} Feb 24 02:35:46.661208 master-0 kubenswrapper[31411]: I0224 02:35:46.659194 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a7dd5a895eab2bd05c3fe54684ac0a4c68cfe819062734726ac10bc0508db5c" Feb 24 02:35:46.661208 master-0 kubenswrapper[31411]: I0224 02:35:46.659263 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e923-account-create-update-dswn2" Feb 24 02:35:46.683800 master-0 kubenswrapper[31411]: I0224 02:35:46.678011 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9158aad4-fb9b-4900-99cd-c78da20e920d-operator-scripts\") pod \"9158aad4-fb9b-4900-99cd-c78da20e920d\" (UID: \"9158aad4-fb9b-4900-99cd-c78da20e920d\") " Feb 24 02:35:46.683800 master-0 kubenswrapper[31411]: I0224 02:35:46.678133 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhjnn\" (UniqueName: \"kubernetes.io/projected/9158aad4-fb9b-4900-99cd-c78da20e920d-kube-api-access-zhjnn\") pod \"9158aad4-fb9b-4900-99cd-c78da20e920d\" (UID: \"9158aad4-fb9b-4900-99cd-c78da20e920d\") " Feb 24 02:35:46.683800 master-0 kubenswrapper[31411]: I0224 02:35:46.678567 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9158aad4-fb9b-4900-99cd-c78da20e920d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9158aad4-fb9b-4900-99cd-c78da20e920d" (UID: "9158aad4-fb9b-4900-99cd-c78da20e920d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:46.683800 master-0 kubenswrapper[31411]: I0224 02:35:46.680462 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7s4s\" (UniqueName: \"kubernetes.io/projected/fc6984b0-e97a-4750-b89f-290aa2cc36b9-kube-api-access-c7s4s\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.683800 master-0 kubenswrapper[31411]: I0224 02:35:46.680494 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9158aad4-fb9b-4900-99cd-c78da20e920d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.685459 master-0 kubenswrapper[31411]: I0224 02:35:46.685245 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9158aad4-fb9b-4900-99cd-c78da20e920d-kube-api-access-zhjnn" (OuterVolumeSpecName: "kube-api-access-zhjnn") pod "9158aad4-fb9b-4900-99cd-c78da20e920d" (UID: "9158aad4-fb9b-4900-99cd-c78da20e920d"). InnerVolumeSpecName "kube-api-access-zhjnn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:35:46.783719 master-0 kubenswrapper[31411]: I0224 02:35:46.783650 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhjnn\" (UniqueName: \"kubernetes.io/projected/9158aad4-fb9b-4900-99cd-c78da20e920d-kube-api-access-zhjnn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:46.988279 master-0 kubenswrapper[31411]: I0224 02:35:46.988198 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0" Feb 24 02:35:46.996479 master-0 kubenswrapper[31411]: I0224 02:35:46.996408 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9c814b7c-b62b-4104-8139-8e6cd597d33f-etc-swift\") pod \"swift-storage-0\" (UID: \"9c814b7c-b62b-4104-8139-8e6cd597d33f\") " pod="openstack/swift-storage-0" Feb 24 02:35:47.056711 master-0 kubenswrapper[31411]: I0224 02:35:47.056617 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 24 02:35:47.701915 master-0 kubenswrapper[31411]: I0224 02:35:47.701861 31411 generic.go:334] "Generic (PLEG): container finished" podID="fc47c58d-5bd1-4cb0-942f-6a048792da9a" containerID="cab178893e81ce3fc027589e361050956654cee75cf192c025e9750761ccd5a2" exitCode=0 Feb 24 02:35:47.702774 master-0 kubenswrapper[31411]: I0224 02:35:47.702726 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc47c58d-5bd1-4cb0-942f-6a048792da9a","Type":"ContainerDied","Data":"cab178893e81ce3fc027589e361050956654cee75cf192c025e9750761ccd5a2"} Feb 24 02:35:47.706342 master-0 kubenswrapper[31411]: I0224 02:35:47.706259 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 24 02:35:47.708610 master-0 kubenswrapper[31411]: I0224 02:35:47.708486 31411 generic.go:334] "Generic (PLEG): container finished" podID="5680b3af-dae8-4617-80b2-30c0a9818130" containerID="c6824a0e2ca82e08c2d8420752f27bd17ff74edcb1ec26f90b9795fe5f26ce69" exitCode=0 Feb 24 02:35:47.708856 master-0 kubenswrapper[31411]: I0224 02:35:47.708804 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5680b3af-dae8-4617-80b2-30c0a9818130","Type":"ContainerDied","Data":"c6824a0e2ca82e08c2d8420752f27bd17ff74edcb1ec26f90b9795fe5f26ce69"} Feb 24 02:35:47.809846 master-0 kubenswrapper[31411]: I0224 02:35:47.808944 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-d7rmf"] Feb 24 02:35:47.810122 master-0 kubenswrapper[31411]: E0224 02:35:47.810089 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc6984b0-e97a-4750-b89f-290aa2cc36b9" containerName="mariadb-database-create" Feb 24 02:35:47.810122 master-0 kubenswrapper[31411]: I0224 02:35:47.810108 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc6984b0-e97a-4750-b89f-290aa2cc36b9" 
containerName="mariadb-database-create" Feb 24 02:35:47.810290 master-0 kubenswrapper[31411]: E0224 02:35:47.810168 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9158aad4-fb9b-4900-99cd-c78da20e920d" containerName="mariadb-account-create-update" Feb 24 02:35:47.810290 master-0 kubenswrapper[31411]: I0224 02:35:47.810177 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="9158aad4-fb9b-4900-99cd-c78da20e920d" containerName="mariadb-account-create-update" Feb 24 02:35:47.810290 master-0 kubenswrapper[31411]: E0224 02:35:47.810216 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1211f9d-8d58-49d4-8892-61d252358fa6" containerName="swift-ring-rebalance" Feb 24 02:35:47.810290 master-0 kubenswrapper[31411]: I0224 02:35:47.810224 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1211f9d-8d58-49d4-8892-61d252358fa6" containerName="swift-ring-rebalance" Feb 24 02:35:47.810830 master-0 kubenswrapper[31411]: I0224 02:35:47.810800 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1211f9d-8d58-49d4-8892-61d252358fa6" containerName="swift-ring-rebalance" Feb 24 02:35:47.810934 master-0 kubenswrapper[31411]: I0224 02:35:47.810866 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="9158aad4-fb9b-4900-99cd-c78da20e920d" containerName="mariadb-account-create-update" Feb 24 02:35:47.810934 master-0 kubenswrapper[31411]: I0224 02:35:47.810907 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc6984b0-e97a-4750-b89f-290aa2cc36b9" containerName="mariadb-database-create" Feb 24 02:35:47.817689 master-0 kubenswrapper[31411]: I0224 02:35:47.817600 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-d7rmf" Feb 24 02:35:47.854620 master-0 kubenswrapper[31411]: I0224 02:35:47.852708 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d7rmf"] Feb 24 02:35:47.933926 master-0 kubenswrapper[31411]: I0224 02:35:47.929210 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5d23-account-create-update-q2xlr"] Feb 24 02:35:47.933926 master-0 kubenswrapper[31411]: I0224 02:35:47.931137 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d23-account-create-update-q2xlr" Feb 24 02:35:47.933926 master-0 kubenswrapper[31411]: I0224 02:35:47.933891 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d23-account-create-update-q2xlr"] Feb 24 02:35:47.934991 master-0 kubenswrapper[31411]: I0224 02:35:47.934945 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 24 02:35:47.942656 master-0 kubenswrapper[31411]: I0224 02:35:47.942599 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw42h\" (UniqueName: \"kubernetes.io/projected/33d27427-00de-48ea-879d-ff3376adfbae-kube-api-access-vw42h\") pod \"keystone-db-create-d7rmf\" (UID: \"33d27427-00de-48ea-879d-ff3376adfbae\") " pod="openstack/keystone-db-create-d7rmf" Feb 24 02:35:47.942757 master-0 kubenswrapper[31411]: I0224 02:35:47.942654 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d27427-00de-48ea-879d-ff3376adfbae-operator-scripts\") pod \"keystone-db-create-d7rmf\" (UID: \"33d27427-00de-48ea-879d-ff3376adfbae\") " pod="openstack/keystone-db-create-d7rmf" Feb 24 02:35:48.050764 master-0 kubenswrapper[31411]: I0224 02:35:48.050674 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34321c8e-7008-47b6-99ad-7752b89e8045-operator-scripts\") pod \"keystone-5d23-account-create-update-q2xlr\" (UID: \"34321c8e-7008-47b6-99ad-7752b89e8045\") " pod="openstack/keystone-5d23-account-create-update-q2xlr" Feb 24 02:35:48.050901 master-0 kubenswrapper[31411]: I0224 02:35:48.050795 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lf5m\" (UniqueName: \"kubernetes.io/projected/34321c8e-7008-47b6-99ad-7752b89e8045-kube-api-access-4lf5m\") pod \"keystone-5d23-account-create-update-q2xlr\" (UID: \"34321c8e-7008-47b6-99ad-7752b89e8045\") " pod="openstack/keystone-5d23-account-create-update-q2xlr" Feb 24 02:35:48.051076 master-0 kubenswrapper[31411]: I0224 02:35:48.051049 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw42h\" (UniqueName: \"kubernetes.io/projected/33d27427-00de-48ea-879d-ff3376adfbae-kube-api-access-vw42h\") pod \"keystone-db-create-d7rmf\" (UID: \"33d27427-00de-48ea-879d-ff3376adfbae\") " pod="openstack/keystone-db-create-d7rmf" Feb 24 02:35:48.051149 master-0 kubenswrapper[31411]: I0224 02:35:48.051087 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d27427-00de-48ea-879d-ff3376adfbae-operator-scripts\") pod \"keystone-db-create-d7rmf\" (UID: \"33d27427-00de-48ea-879d-ff3376adfbae\") " pod="openstack/keystone-db-create-d7rmf" Feb 24 02:35:48.053668 master-0 kubenswrapper[31411]: I0224 02:35:48.053467 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d27427-00de-48ea-879d-ff3376adfbae-operator-scripts\") pod \"keystone-db-create-d7rmf\" (UID: \"33d27427-00de-48ea-879d-ff3376adfbae\") " 
pod="openstack/keystone-db-create-d7rmf" Feb 24 02:35:48.117701 master-0 kubenswrapper[31411]: I0224 02:35:48.116885 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-07e8-account-create-update-4xjm5"] Feb 24 02:35:48.123590 master-0 kubenswrapper[31411]: I0224 02:35:48.119816 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-07e8-account-create-update-4xjm5" Feb 24 02:35:48.123590 master-0 kubenswrapper[31411]: I0224 02:35:48.122312 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw42h\" (UniqueName: \"kubernetes.io/projected/33d27427-00de-48ea-879d-ff3376adfbae-kube-api-access-vw42h\") pod \"keystone-db-create-d7rmf\" (UID: \"33d27427-00de-48ea-879d-ff3376adfbae\") " pod="openstack/keystone-db-create-d7rmf" Feb 24 02:35:48.128589 master-0 kubenswrapper[31411]: I0224 02:35:48.125519 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 24 02:35:48.129748 master-0 kubenswrapper[31411]: I0224 02:35:48.129173 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cjhw4"] Feb 24 02:35:48.132737 master-0 kubenswrapper[31411]: I0224 02:35:48.130548 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cjhw4" Feb 24 02:35:48.140370 master-0 kubenswrapper[31411]: I0224 02:35:48.140333 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-07e8-account-create-update-4xjm5"] Feb 24 02:35:48.147800 master-0 kubenswrapper[31411]: I0224 02:35:48.146920 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cjhw4"] Feb 24 02:35:48.157594 master-0 kubenswrapper[31411]: I0224 02:35:48.153322 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34321c8e-7008-47b6-99ad-7752b89e8045-operator-scripts\") pod \"keystone-5d23-account-create-update-q2xlr\" (UID: \"34321c8e-7008-47b6-99ad-7752b89e8045\") " pod="openstack/keystone-5d23-account-create-update-q2xlr" Feb 24 02:35:48.157594 master-0 kubenswrapper[31411]: I0224 02:35:48.153401 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lf5m\" (UniqueName: \"kubernetes.io/projected/34321c8e-7008-47b6-99ad-7752b89e8045-kube-api-access-4lf5m\") pod \"keystone-5d23-account-create-update-q2xlr\" (UID: \"34321c8e-7008-47b6-99ad-7752b89e8045\") " pod="openstack/keystone-5d23-account-create-update-q2xlr" Feb 24 02:35:48.157594 master-0 kubenswrapper[31411]: I0224 02:35:48.154963 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34321c8e-7008-47b6-99ad-7752b89e8045-operator-scripts\") pod \"keystone-5d23-account-create-update-q2xlr\" (UID: \"34321c8e-7008-47b6-99ad-7752b89e8045\") " pod="openstack/keystone-5d23-account-create-update-q2xlr" Feb 24 02:35:48.179638 master-0 kubenswrapper[31411]: I0224 02:35:48.179326 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lf5m\" (UniqueName: 
\"kubernetes.io/projected/34321c8e-7008-47b6-99ad-7752b89e8045-kube-api-access-4lf5m\") pod \"keystone-5d23-account-create-update-q2xlr\" (UID: \"34321c8e-7008-47b6-99ad-7752b89e8045\") " pod="openstack/keystone-5d23-account-create-update-q2xlr" Feb 24 02:35:48.222381 master-0 kubenswrapper[31411]: I0224 02:35:48.222336 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:48.240274 master-0 kubenswrapper[31411]: I0224 02:35:48.240233 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7rmf" Feb 24 02:35:48.255780 master-0 kubenswrapper[31411]: I0224 02:35:48.255755 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856ea150-40e7-4381-ab60-83a9974d5ff7-operator-scripts\") pod \"placement-07e8-account-create-update-4xjm5\" (UID: \"856ea150-40e7-4381-ab60-83a9974d5ff7\") " pod="openstack/placement-07e8-account-create-update-4xjm5" Feb 24 02:35:48.256125 master-0 kubenswrapper[31411]: I0224 02:35:48.256101 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzbz\" (UniqueName: \"kubernetes.io/projected/856ea150-40e7-4381-ab60-83a9974d5ff7-kube-api-access-7lzbz\") pod \"placement-07e8-account-create-update-4xjm5\" (UID: \"856ea150-40e7-4381-ab60-83a9974d5ff7\") " pod="openstack/placement-07e8-account-create-update-4xjm5" Feb 24 02:35:48.256290 master-0 kubenswrapper[31411]: I0224 02:35:48.256274 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-operator-scripts\") pod \"placement-db-create-cjhw4\" (UID: \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\") " pod="openstack/placement-db-create-cjhw4" Feb 24 
02:35:48.256399 master-0 kubenswrapper[31411]: I0224 02:35:48.256385 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd25j\" (UniqueName: \"kubernetes.io/projected/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-kube-api-access-hd25j\") pod \"placement-db-create-cjhw4\" (UID: \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\") " pod="openstack/placement-db-create-cjhw4" Feb 24 02:35:48.273511 master-0 kubenswrapper[31411]: I0224 02:35:48.273451 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d23-account-create-update-q2xlr" Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.359064 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-operator-scripts\") pod \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\" (UID: \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\") " Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.359212 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlxc8\" (UniqueName: \"kubernetes.io/projected/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-kube-api-access-jlxc8\") pod \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\" (UID: \"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7\") " Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.360091 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856ea150-40e7-4381-ab60-83a9974d5ff7-operator-scripts\") pod \"placement-07e8-account-create-update-4xjm5\" (UID: \"856ea150-40e7-4381-ab60-83a9974d5ff7\") " pod="openstack/placement-07e8-account-create-update-4xjm5" Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.360132 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7" (UID: "bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.360176 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lzbz\" (UniqueName: \"kubernetes.io/projected/856ea150-40e7-4381-ab60-83a9974d5ff7-kube-api-access-7lzbz\") pod \"placement-07e8-account-create-update-4xjm5\" (UID: \"856ea150-40e7-4381-ab60-83a9974d5ff7\") " pod="openstack/placement-07e8-account-create-update-4xjm5" Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.360292 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-operator-scripts\") pod \"placement-db-create-cjhw4\" (UID: \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\") " pod="openstack/placement-db-create-cjhw4" Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.360351 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd25j\" (UniqueName: \"kubernetes.io/projected/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-kube-api-access-hd25j\") pod \"placement-db-create-cjhw4\" (UID: \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\") " pod="openstack/placement-db-create-cjhw4" Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.360493 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.362677 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-operator-scripts\") pod \"placement-db-create-cjhw4\" (UID: \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\") " pod="openstack/placement-db-create-cjhw4" Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.363886 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-kube-api-access-jlxc8" (OuterVolumeSpecName: "kube-api-access-jlxc8") pod "bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7" (UID: "bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7"). InnerVolumeSpecName "kube-api-access-jlxc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:35:48.377030 master-0 kubenswrapper[31411]: I0224 02:35:48.374894 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856ea150-40e7-4381-ab60-83a9974d5ff7-operator-scripts\") pod \"placement-07e8-account-create-update-4xjm5\" (UID: \"856ea150-40e7-4381-ab60-83a9974d5ff7\") " pod="openstack/placement-07e8-account-create-update-4xjm5" Feb 24 02:35:48.379539 master-0 kubenswrapper[31411]: I0224 02:35:48.379342 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd25j\" (UniqueName: \"kubernetes.io/projected/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-kube-api-access-hd25j\") pod \"placement-db-create-cjhw4\" (UID: \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\") " pod="openstack/placement-db-create-cjhw4" Feb 24 02:35:48.380871 master-0 kubenswrapper[31411]: I0224 02:35:48.380840 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lzbz\" (UniqueName: \"kubernetes.io/projected/856ea150-40e7-4381-ab60-83a9974d5ff7-kube-api-access-7lzbz\") pod \"placement-07e8-account-create-update-4xjm5\" (UID: \"856ea150-40e7-4381-ab60-83a9974d5ff7\") " pod="openstack/placement-07e8-account-create-update-4xjm5" Feb 
24 02:35:48.462794 master-0 kubenswrapper[31411]: I0224 02:35:48.462457 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlxc8\" (UniqueName: \"kubernetes.io/projected/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7-kube-api-access-jlxc8\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:48.469481 master-0 kubenswrapper[31411]: I0224 02:35:48.469426 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-07e8-account-create-update-4xjm5" Feb 24 02:35:48.486008 master-0 kubenswrapper[31411]: I0224 02:35:48.485944 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cjhw4" Feb 24 02:35:48.724916 master-0 kubenswrapper[31411]: I0224 02:35:48.724864 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc47c58d-5bd1-4cb0-942f-6a048792da9a","Type":"ContainerStarted","Data":"2909c4857b5ed7ad192f02d593ccdefb360625a5022b4d3cc9868feb9ca199c8"} Feb 24 02:35:48.727680 master-0 kubenswrapper[31411]: I0224 02:35:48.726804 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 24 02:35:48.728982 master-0 kubenswrapper[31411]: I0224 02:35:48.728904 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pxfms" event={"ID":"bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7","Type":"ContainerDied","Data":"ba88d794891bb16880291a49324f6b1f732396b040d4c24d970912b975916aaa"} Feb 24 02:35:48.729643 master-0 kubenswrapper[31411]: I0224 02:35:48.729099 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pxfms" Feb 24 02:35:48.729643 master-0 kubenswrapper[31411]: I0224 02:35:48.729142 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba88d794891bb16880291a49324f6b1f732396b040d4c24d970912b975916aaa" Feb 24 02:35:48.733141 master-0 kubenswrapper[31411]: I0224 02:35:48.733101 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5680b3af-dae8-4617-80b2-30c0a9818130","Type":"ContainerStarted","Data":"03fdec2cfb056a9dccbd7eef31cf1d2e63ad1fe4c4fe3ee4d1731cd336361d0e"} Feb 24 02:35:48.733437 master-0 kubenswrapper[31411]: I0224 02:35:48.733369 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 24 02:35:48.736762 master-0 kubenswrapper[31411]: I0224 02:35:48.736665 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"2c128800d612b4ae2be49a21361171352936c8bf3177326503767d7afe1d92cf"} Feb 24 02:35:48.792554 master-0 kubenswrapper[31411]: I0224 02:35:48.792399 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.013080733 podStartE2EDuration="1m5.792371499s" podCreationTimestamp="2026-02-24 02:34:43 +0000 UTC" firstStartedPulling="2026-02-24 02:34:57.213675249 +0000 UTC m=+840.430873105" lastFinishedPulling="2026-02-24 02:35:12.992966025 +0000 UTC m=+856.210163871" observedRunningTime="2026-02-24 02:35:48.770249779 +0000 UTC m=+891.987447635" watchObservedRunningTime="2026-02-24 02:35:48.792371499 +0000 UTC m=+892.009569335" Feb 24 02:35:48.810850 master-0 kubenswrapper[31411]: I0224 02:35:48.810644 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-d7rmf"] Feb 24 02:35:48.812828 master-0 kubenswrapper[31411]: I0224 
02:35:48.812665 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=46.406415966 podStartE2EDuration="1m6.812634817s" podCreationTimestamp="2026-02-24 02:34:42 +0000 UTC" firstStartedPulling="2026-02-24 02:34:52.515315251 +0000 UTC m=+835.732513097" lastFinishedPulling="2026-02-24 02:35:12.921534092 +0000 UTC m=+856.138731948" observedRunningTime="2026-02-24 02:35:48.800704432 +0000 UTC m=+892.017902288" watchObservedRunningTime="2026-02-24 02:35:48.812634817 +0000 UTC m=+892.029832663" Feb 24 02:35:49.022774 master-0 kubenswrapper[31411]: I0224 02:35:49.022696 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d23-account-create-update-q2xlr"] Feb 24 02:35:49.031902 master-0 kubenswrapper[31411]: I0224 02:35:49.031854 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-07e8-account-create-update-4xjm5"] Feb 24 02:35:49.034272 master-0 kubenswrapper[31411]: W0224 02:35:49.034243 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34321c8e_7008_47b6_99ad_7752b89e8045.slice/crio-76d732a42dfd12f21df4edfd548bd6b9fbb9976d9940136097d8902ecee04dd3 WatchSource:0}: Error finding container 76d732a42dfd12f21df4edfd548bd6b9fbb9976d9940136097d8902ecee04dd3: Status 404 returned error can't find the container with id 76d732a42dfd12f21df4edfd548bd6b9fbb9976d9940136097d8902ecee04dd3 Feb 24 02:35:49.207601 master-0 kubenswrapper[31411]: I0224 02:35:49.202852 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 24 02:35:49.240278 master-0 kubenswrapper[31411]: I0224 02:35:49.240229 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cjhw4"] Feb 24 02:35:49.751177 master-0 kubenswrapper[31411]: I0224 02:35:49.750127 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"6443e31d5aeb376a27994ac13d2b058a27585cb0d36ada48f5d784ba5b97e0ca"} Feb 24 02:35:49.753689 master-0 kubenswrapper[31411]: I0224 02:35:49.751798 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7rmf" event={"ID":"33d27427-00de-48ea-879d-ff3376adfbae","Type":"ContainerStarted","Data":"44da10972596f19eda3d5b23c65ce994fe9ec7974580fd66084a9f783a62e356"} Feb 24 02:35:49.753689 master-0 kubenswrapper[31411]: I0224 02:35:49.751869 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7rmf" event={"ID":"33d27427-00de-48ea-879d-ff3376adfbae","Type":"ContainerStarted","Data":"62f4badfbe148bf2a82798c428839babefdca858fdac17bc4392b68af5abd488"} Feb 24 02:35:49.757139 master-0 kubenswrapper[31411]: I0224 02:35:49.756161 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-07e8-account-create-update-4xjm5" event={"ID":"856ea150-40e7-4381-ab60-83a9974d5ff7","Type":"ContainerStarted","Data":"06f2b62681f4a8848bb5355e114e6e4bb4606b29bbb7762ebb5012cc0a5e1660"} Feb 24 02:35:49.757139 master-0 kubenswrapper[31411]: I0224 02:35:49.756232 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-07e8-account-create-update-4xjm5" event={"ID":"856ea150-40e7-4381-ab60-83a9974d5ff7","Type":"ContainerStarted","Data":"547c1c2e9560a0f8f4836ee4ee782b271bab3d0dfa4eaec816ccb446de427484"} Feb 24 02:35:49.760736 master-0 kubenswrapper[31411]: I0224 02:35:49.760564 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjhw4" event={"ID":"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25","Type":"ContainerStarted","Data":"2124a3042cb7397e98aa772cf754d14881cec0b76a807be556609d45409f8987"} Feb 24 02:35:49.760736 master-0 kubenswrapper[31411]: I0224 02:35:49.760615 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-cjhw4" event={"ID":"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25","Type":"ContainerStarted","Data":"2dbc8fa572fefcb83567e84d3f490f174579fe468dfb37bc4ecbbe09340a2493"} Feb 24 02:35:49.765555 master-0 kubenswrapper[31411]: I0224 02:35:49.765531 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d23-account-create-update-q2xlr" event={"ID":"34321c8e-7008-47b6-99ad-7752b89e8045","Type":"ContainerStarted","Data":"76d732a42dfd12f21df4edfd548bd6b9fbb9976d9940136097d8902ecee04dd3"} Feb 24 02:35:49.814023 master-0 kubenswrapper[31411]: I0224 02:35:49.795843 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-d7rmf" podStartSLOduration=2.795820458 podStartE2EDuration="2.795820458s" podCreationTimestamp="2026-02-24 02:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:35:49.780297022 +0000 UTC m=+892.997494868" watchObservedRunningTime="2026-02-24 02:35:49.795820458 +0000 UTC m=+893.013018304" Feb 24 02:35:49.824770 master-0 kubenswrapper[31411]: I0224 02:35:49.824645 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5d23-account-create-update-q2xlr" podStartSLOduration=2.8246157849999998 podStartE2EDuration="2.824615785s" podCreationTimestamp="2026-02-24 02:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:35:49.818973237 +0000 UTC m=+893.036171083" watchObservedRunningTime="2026-02-24 02:35:49.824615785 +0000 UTC m=+893.041813631" Feb 24 02:35:49.851591 master-0 kubenswrapper[31411]: I0224 02:35:49.851498 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-cjhw4" podStartSLOduration=1.851479498 podStartE2EDuration="1.851479498s" 
podCreationTimestamp="2026-02-24 02:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:35:49.848499734 +0000 UTC m=+893.065697580" watchObservedRunningTime="2026-02-24 02:35:49.851479498 +0000 UTC m=+893.068677334" Feb 24 02:35:49.877697 master-0 kubenswrapper[31411]: I0224 02:35:49.877464 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-07e8-account-create-update-4xjm5" podStartSLOduration=1.877428775 podStartE2EDuration="1.877428775s" podCreationTimestamp="2026-02-24 02:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:35:49.876648623 +0000 UTC m=+893.093846469" watchObservedRunningTime="2026-02-24 02:35:49.877428775 +0000 UTC m=+893.094626621" Feb 24 02:35:50.781965 master-0 kubenswrapper[31411]: I0224 02:35:50.781880 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"61eeef34782dfb474fbb9da239d0f15d3c52aa0b8a8187b224d3d64aee240d41"} Feb 24 02:35:50.782657 master-0 kubenswrapper[31411]: I0224 02:35:50.781979 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"403ae6ccbcb37c0ca593a01d625226f35de2b194ad9c1acc27d8c046e659a8a0"} Feb 24 02:35:50.782657 master-0 kubenswrapper[31411]: I0224 02:35:50.781998 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"2e7a1645b31c59cee6c4897b97afea71d7c9f06919f6f509a79f58c649c0743c"} Feb 24 02:35:50.784668 master-0 kubenswrapper[31411]: I0224 02:35:50.784605 31411 generic.go:334] "Generic (PLEG): container finished" 
podID="33d27427-00de-48ea-879d-ff3376adfbae" containerID="44da10972596f19eda3d5b23c65ce994fe9ec7974580fd66084a9f783a62e356" exitCode=0 Feb 24 02:35:50.784812 master-0 kubenswrapper[31411]: I0224 02:35:50.784740 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7rmf" event={"ID":"33d27427-00de-48ea-879d-ff3376adfbae","Type":"ContainerDied","Data":"44da10972596f19eda3d5b23c65ce994fe9ec7974580fd66084a9f783a62e356"} Feb 24 02:35:50.788503 master-0 kubenswrapper[31411]: I0224 02:35:50.788445 31411 generic.go:334] "Generic (PLEG): container finished" podID="856ea150-40e7-4381-ab60-83a9974d5ff7" containerID="06f2b62681f4a8848bb5355e114e6e4bb4606b29bbb7762ebb5012cc0a5e1660" exitCode=0 Feb 24 02:35:50.788596 master-0 kubenswrapper[31411]: I0224 02:35:50.788508 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-07e8-account-create-update-4xjm5" event={"ID":"856ea150-40e7-4381-ab60-83a9974d5ff7","Type":"ContainerDied","Data":"06f2b62681f4a8848bb5355e114e6e4bb4606b29bbb7762ebb5012cc0a5e1660"} Feb 24 02:35:50.791335 master-0 kubenswrapper[31411]: I0224 02:35:50.791281 31411 generic.go:334] "Generic (PLEG): container finished" podID="5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25" containerID="2124a3042cb7397e98aa772cf754d14881cec0b76a807be556609d45409f8987" exitCode=0 Feb 24 02:35:50.791435 master-0 kubenswrapper[31411]: I0224 02:35:50.791401 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjhw4" event={"ID":"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25","Type":"ContainerDied","Data":"2124a3042cb7397e98aa772cf754d14881cec0b76a807be556609d45409f8987"} Feb 24 02:35:50.793906 master-0 kubenswrapper[31411]: I0224 02:35:50.793858 31411 generic.go:334] "Generic (PLEG): container finished" podID="34321c8e-7008-47b6-99ad-7752b89e8045" containerID="45b1b0a25f45e5b28d4c3d6363eda6915a2219746496fd39b35c97ca94f35908" exitCode=0 Feb 24 02:35:50.793979 master-0 kubenswrapper[31411]: I0224 
02:35:50.793923 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d23-account-create-update-q2xlr" event={"ID":"34321c8e-7008-47b6-99ad-7752b89e8045","Type":"ContainerDied","Data":"45b1b0a25f45e5b28d4c3d6363eda6915a2219746496fd39b35c97ca94f35908"}
Feb 24 02:35:51.463101 master-0 kubenswrapper[31411]: I0224 02:35:51.463020 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pxfms"]
Feb 24 02:35:51.476361 master-0 kubenswrapper[31411]: I0224 02:35:51.476277 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pxfms"]
Feb 24 02:35:51.815064 master-0 kubenswrapper[31411]: I0224 02:35:51.814657 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"54ea3689ede00eabfb0559202a9d6e1fa3a281ae1d5fa1d79d4a418fdaef90e7"}
Feb 24 02:35:52.037262 master-0 kubenswrapper[31411]: I0224 02:35:52.037131 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dnhq7"]
Feb 24 02:35:52.042922 master-0 kubenswrapper[31411]: E0224 02:35:52.042887 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7" containerName="mariadb-account-create-update"
Feb 24 02:35:52.042922 master-0 kubenswrapper[31411]: I0224 02:35:52.042916 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7" containerName="mariadb-account-create-update"
Feb 24 02:35:52.043312 master-0 kubenswrapper[31411]: I0224 02:35:52.043286 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7" containerName="mariadb-account-create-update"
Feb 24 02:35:52.044301 master-0 kubenswrapper[31411]: I0224 02:35:52.044273 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.047207 master-0 kubenswrapper[31411]: I0224 02:35:52.047152 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-8705a-config-data"
Feb 24 02:35:52.050944 master-0 kubenswrapper[31411]: I0224 02:35:52.050889 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dnhq7"]
Feb 24 02:35:52.230690 master-0 kubenswrapper[31411]: I0224 02:35:52.226249 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-combined-ca-bundle\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.230690 master-0 kubenswrapper[31411]: I0224 02:35:52.226406 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlz7k\" (UniqueName: \"kubernetes.io/projected/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-kube-api-access-hlz7k\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.230690 master-0 kubenswrapper[31411]: I0224 02:35:52.226434 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-db-sync-config-data\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.230690 master-0 kubenswrapper[31411]: I0224 02:35:52.226490 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-config-data\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.286294 master-0 kubenswrapper[31411]: I0224 02:35:52.286250 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cjhw4"
Feb 24 02:35:52.344597 master-0 kubenswrapper[31411]: I0224 02:35:52.341385 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlz7k\" (UniqueName: \"kubernetes.io/projected/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-kube-api-access-hlz7k\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.344597 master-0 kubenswrapper[31411]: I0224 02:35:52.341465 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-db-sync-config-data\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.344597 master-0 kubenswrapper[31411]: I0224 02:35:52.341536 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-config-data\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.344597 master-0 kubenswrapper[31411]: I0224 02:35:52.341793 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-combined-ca-bundle\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.349608 master-0 kubenswrapper[31411]: I0224 02:35:52.348111 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-db-sync-config-data\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.349608 master-0 kubenswrapper[31411]: I0224 02:35:52.349206 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-combined-ca-bundle\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.356014 master-0 kubenswrapper[31411]: I0224 02:35:52.353189 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-config-data\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.372878 master-0 kubenswrapper[31411]: I0224 02:35:52.372832 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlz7k\" (UniqueName: \"kubernetes.io/projected/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-kube-api-access-hlz7k\") pod \"glance-db-sync-dnhq7\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.398166 master-0 kubenswrapper[31411]: I0224 02:35:52.396402 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dnhq7"
Feb 24 02:35:52.446604 master-0 kubenswrapper[31411]: I0224 02:35:52.443478 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-operator-scripts\") pod \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\" (UID: \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\") "
Feb 24 02:35:52.446604 master-0 kubenswrapper[31411]: I0224 02:35:52.443712 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd25j\" (UniqueName: \"kubernetes.io/projected/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-kube-api-access-hd25j\") pod \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\" (UID: \"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25\") "
Feb 24 02:35:52.446604 master-0 kubenswrapper[31411]: I0224 02:35:52.444078 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25" (UID: "5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:52.446604 master-0 kubenswrapper[31411]: I0224 02:35:52.444860 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:52.452342 master-0 kubenswrapper[31411]: I0224 02:35:52.449028 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-kube-api-access-hd25j" (OuterVolumeSpecName: "kube-api-access-hd25j") pod "5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25" (UID: "5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25"). InnerVolumeSpecName "kube-api-access-hd25j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:35:52.555414 master-0 kubenswrapper[31411]: I0224 02:35:52.552629 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd25j\" (UniqueName: \"kubernetes.io/projected/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25-kube-api-access-hd25j\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:52.752102 master-0 kubenswrapper[31411]: I0224 02:35:52.752051 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d23-account-create-update-q2xlr"
Feb 24 02:35:52.763777 master-0 kubenswrapper[31411]: I0224 02:35:52.763738 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-07e8-account-create-update-4xjm5"
Feb 24 02:35:52.873855 master-0 kubenswrapper[31411]: I0224 02:35:52.873787 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34321c8e-7008-47b6-99ad-7752b89e8045-operator-scripts\") pod \"34321c8e-7008-47b6-99ad-7752b89e8045\" (UID: \"34321c8e-7008-47b6-99ad-7752b89e8045\") "
Feb 24 02:35:52.874767 master-0 kubenswrapper[31411]: I0224 02:35:52.873993 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856ea150-40e7-4381-ab60-83a9974d5ff7-operator-scripts\") pod \"856ea150-40e7-4381-ab60-83a9974d5ff7\" (UID: \"856ea150-40e7-4381-ab60-83a9974d5ff7\") "
Feb 24 02:35:52.874767 master-0 kubenswrapper[31411]: I0224 02:35:52.874031 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lzbz\" (UniqueName: \"kubernetes.io/projected/856ea150-40e7-4381-ab60-83a9974d5ff7-kube-api-access-7lzbz\") pod \"856ea150-40e7-4381-ab60-83a9974d5ff7\" (UID: \"856ea150-40e7-4381-ab60-83a9974d5ff7\") "
Feb 24 02:35:52.874767 master-0 kubenswrapper[31411]: I0224 02:35:52.874183 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lf5m\" (UniqueName: \"kubernetes.io/projected/34321c8e-7008-47b6-99ad-7752b89e8045-kube-api-access-4lf5m\") pod \"34321c8e-7008-47b6-99ad-7752b89e8045\" (UID: \"34321c8e-7008-47b6-99ad-7752b89e8045\") "
Feb 24 02:35:52.875038 master-0 kubenswrapper[31411]: I0224 02:35:52.874984 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34321c8e-7008-47b6-99ad-7752b89e8045-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34321c8e-7008-47b6-99ad-7752b89e8045" (UID: "34321c8e-7008-47b6-99ad-7752b89e8045"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:52.876014 master-0 kubenswrapper[31411]: I0224 02:35:52.875982 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/856ea150-40e7-4381-ab60-83a9974d5ff7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "856ea150-40e7-4381-ab60-83a9974d5ff7" (UID: "856ea150-40e7-4381-ab60-83a9974d5ff7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:52.876392 master-0 kubenswrapper[31411]: I0224 02:35:52.876366 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34321c8e-7008-47b6-99ad-7752b89e8045-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:52.876392 master-0 kubenswrapper[31411]: I0224 02:35:52.876388 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/856ea150-40e7-4381-ab60-83a9974d5ff7-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:52.878418 master-0 kubenswrapper[31411]: I0224 02:35:52.878347 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"71d38da6a6b5f16f32385f28e570d5ef852dc74f8e286bdd4c313230c81f0f9c"}
Feb 24 02:35:52.878488 master-0 kubenswrapper[31411]: I0224 02:35:52.878440 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"451092cb3a31f0e0c9a287de6df627566390cdef2fc84e06d327c2fb0790c218"}
Feb 24 02:35:52.878488 master-0 kubenswrapper[31411]: I0224 02:35:52.878453 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"5f934350c0701173ff862b37e765fc5df0d5a25befa93c2ab1ab2eddc91b5640"}
Feb 24 02:35:52.878873 master-0 kubenswrapper[31411]: I0224 02:35:52.878831 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856ea150-40e7-4381-ab60-83a9974d5ff7-kube-api-access-7lzbz" (OuterVolumeSpecName: "kube-api-access-7lzbz") pod "856ea150-40e7-4381-ab60-83a9974d5ff7" (UID: "856ea150-40e7-4381-ab60-83a9974d5ff7"). InnerVolumeSpecName "kube-api-access-7lzbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:35:52.878915 master-0 kubenswrapper[31411]: I0224 02:35:52.878905 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34321c8e-7008-47b6-99ad-7752b89e8045-kube-api-access-4lf5m" (OuterVolumeSpecName: "kube-api-access-4lf5m") pod "34321c8e-7008-47b6-99ad-7752b89e8045" (UID: "34321c8e-7008-47b6-99ad-7752b89e8045"). InnerVolumeSpecName "kube-api-access-4lf5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:35:52.884080 master-0 kubenswrapper[31411]: I0224 02:35:52.884024 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-07e8-account-create-update-4xjm5" event={"ID":"856ea150-40e7-4381-ab60-83a9974d5ff7","Type":"ContainerDied","Data":"547c1c2e9560a0f8f4836ee4ee782b271bab3d0dfa4eaec816ccb446de427484"}
Feb 24 02:35:52.884137 master-0 kubenswrapper[31411]: I0224 02:35:52.884082 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="547c1c2e9560a0f8f4836ee4ee782b271bab3d0dfa4eaec816ccb446de427484"
Feb 24 02:35:52.884137 master-0 kubenswrapper[31411]: I0224 02:35:52.884055 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-07e8-account-create-update-4xjm5"
Feb 24 02:35:52.885700 master-0 kubenswrapper[31411]: I0224 02:35:52.885667 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cjhw4" event={"ID":"5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25","Type":"ContainerDied","Data":"2dbc8fa572fefcb83567e84d3f490f174579fe468dfb37bc4ecbbe09340a2493"}
Feb 24 02:35:52.885700 master-0 kubenswrapper[31411]: I0224 02:35:52.885700 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dbc8fa572fefcb83567e84d3f490f174579fe468dfb37bc4ecbbe09340a2493"
Feb 24 02:35:52.885795 master-0 kubenswrapper[31411]: I0224 02:35:52.885753 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cjhw4"
Feb 24 02:35:52.892061 master-0 kubenswrapper[31411]: I0224 02:35:52.892010 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d23-account-create-update-q2xlr" event={"ID":"34321c8e-7008-47b6-99ad-7752b89e8045","Type":"ContainerDied","Data":"76d732a42dfd12f21df4edfd548bd6b9fbb9976d9940136097d8902ecee04dd3"}
Feb 24 02:35:52.892061 master-0 kubenswrapper[31411]: I0224 02:35:52.892044 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d23-account-create-update-q2xlr"
Feb 24 02:35:52.892164 master-0 kubenswrapper[31411]: I0224 02:35:52.892063 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d732a42dfd12f21df4edfd548bd6b9fbb9976d9940136097d8902ecee04dd3"
Feb 24 02:35:52.936997 master-0 kubenswrapper[31411]: I0224 02:35:52.936964 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7rmf"
Feb 24 02:35:52.977038 master-0 kubenswrapper[31411]: I0224 02:35:52.976980 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d27427-00de-48ea-879d-ff3376adfbae-operator-scripts\") pod \"33d27427-00de-48ea-879d-ff3376adfbae\" (UID: \"33d27427-00de-48ea-879d-ff3376adfbae\") "
Feb 24 02:35:52.977253 master-0 kubenswrapper[31411]: I0224 02:35:52.977221 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw42h\" (UniqueName: \"kubernetes.io/projected/33d27427-00de-48ea-879d-ff3376adfbae-kube-api-access-vw42h\") pod \"33d27427-00de-48ea-879d-ff3376adfbae\" (UID: \"33d27427-00de-48ea-879d-ff3376adfbae\") "
Feb 24 02:35:52.980725 master-0 kubenswrapper[31411]: I0224 02:35:52.977560 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lzbz\" (UniqueName: \"kubernetes.io/projected/856ea150-40e7-4381-ab60-83a9974d5ff7-kube-api-access-7lzbz\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:52.980846 master-0 kubenswrapper[31411]: I0224 02:35:52.980807 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lf5m\" (UniqueName: \"kubernetes.io/projected/34321c8e-7008-47b6-99ad-7752b89e8045-kube-api-access-4lf5m\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:52.980897 master-0 kubenswrapper[31411]: I0224 02:35:52.978040 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d27427-00de-48ea-879d-ff3376adfbae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33d27427-00de-48ea-879d-ff3376adfbae" (UID: "33d27427-00de-48ea-879d-ff3376adfbae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:35:52.995726 master-0 kubenswrapper[31411]: I0224 02:35:52.995614 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d27427-00de-48ea-879d-ff3376adfbae-kube-api-access-vw42h" (OuterVolumeSpecName: "kube-api-access-vw42h") pod "33d27427-00de-48ea-879d-ff3376adfbae" (UID: "33d27427-00de-48ea-879d-ff3376adfbae"). InnerVolumeSpecName "kube-api-access-vw42h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:35:53.083781 master-0 kubenswrapper[31411]: I0224 02:35:53.083733 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw42h\" (UniqueName: \"kubernetes.io/projected/33d27427-00de-48ea-879d-ff3376adfbae-kube-api-access-vw42h\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:53.083781 master-0 kubenswrapper[31411]: I0224 02:35:53.083767 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d27427-00de-48ea-879d-ff3376adfbae-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:35:53.122940 master-0 kubenswrapper[31411]: I0224 02:35:53.122437 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7" path="/var/lib/kubelet/pods/bc6a4062-0e87-4d7e-bb7d-8e3a127ea5e7/volumes"
Feb 24 02:35:53.128376 master-0 kubenswrapper[31411]: I0224 02:35:53.128312 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dnhq7"]
Feb 24 02:35:53.913522 master-0 kubenswrapper[31411]: I0224 02:35:53.913440 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-d7rmf" event={"ID":"33d27427-00de-48ea-879d-ff3376adfbae","Type":"ContainerDied","Data":"62f4badfbe148bf2a82798c428839babefdca858fdac17bc4392b68af5abd488"}
Feb 24 02:35:53.913522 master-0 kubenswrapper[31411]: I0224 02:35:53.913507 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62f4badfbe148bf2a82798c428839babefdca858fdac17bc4392b68af5abd488"
Feb 24 02:35:53.914299 master-0 kubenswrapper[31411]: I0224 02:35:53.913628 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-d7rmf"
Feb 24 02:35:53.920872 master-0 kubenswrapper[31411]: I0224 02:35:53.918283 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnhq7" event={"ID":"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af","Type":"ContainerStarted","Data":"c436687278d8e1f31b859f287d062a78d9ad8078821276e79852e67b19f5d1bf"}
Feb 24 02:35:54.502234 master-0 kubenswrapper[31411]: I0224 02:35:54.502149 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hjmv9" podUID="76252167-d1e5-4ee1-b26f-853eb9e161a7" containerName="ovn-controller" probeResult="failure" output=<
Feb 24 02:35:54.502234 master-0 kubenswrapper[31411]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 24 02:35:54.502234 master-0 kubenswrapper[31411]: >
Feb 24 02:35:54.569205 master-0 kubenswrapper[31411]: I0224 02:35:54.569159 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lp2wm"
Feb 24 02:35:54.605270 master-0 kubenswrapper[31411]: I0224 02:35:54.605216 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lp2wm"
Feb 24 02:35:54.872689 master-0 kubenswrapper[31411]: I0224 02:35:54.872264 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hjmv9-config-wk789"]
Feb 24 02:35:54.872976 master-0 kubenswrapper[31411]: E0224 02:35:54.872862 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d27427-00de-48ea-879d-ff3376adfbae" containerName="mariadb-database-create"
Feb 24 02:35:54.872976 master-0 kubenswrapper[31411]: I0224 02:35:54.872879 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d27427-00de-48ea-879d-ff3376adfbae" containerName="mariadb-database-create"
Feb 24 02:35:54.872976 master-0 kubenswrapper[31411]: E0224 02:35:54.872932 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25" containerName="mariadb-database-create"
Feb 24 02:35:54.872976 master-0 kubenswrapper[31411]: I0224 02:35:54.872940 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25" containerName="mariadb-database-create"
Feb 24 02:35:54.872976 master-0 kubenswrapper[31411]: E0224 02:35:54.872955 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34321c8e-7008-47b6-99ad-7752b89e8045" containerName="mariadb-account-create-update"
Feb 24 02:35:54.872976 master-0 kubenswrapper[31411]: I0224 02:35:54.872964 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="34321c8e-7008-47b6-99ad-7752b89e8045" containerName="mariadb-account-create-update"
Feb 24 02:35:54.873168 master-0 kubenswrapper[31411]: E0224 02:35:54.872989 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856ea150-40e7-4381-ab60-83a9974d5ff7" containerName="mariadb-account-create-update"
Feb 24 02:35:54.873168 master-0 kubenswrapper[31411]: I0224 02:35:54.872998 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="856ea150-40e7-4381-ab60-83a9974d5ff7" containerName="mariadb-account-create-update"
Feb 24 02:35:54.877536 master-0 kubenswrapper[31411]: I0224 02:35:54.873271 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25" containerName="mariadb-database-create"
Feb 24 02:35:54.877536 master-0 kubenswrapper[31411]: I0224 02:35:54.873294 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d27427-00de-48ea-879d-ff3376adfbae" containerName="mariadb-database-create"
Feb 24 02:35:54.877536 master-0 kubenswrapper[31411]: I0224 02:35:54.873329 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="856ea150-40e7-4381-ab60-83a9974d5ff7" containerName="mariadb-account-create-update"
Feb 24 02:35:54.877536 master-0 kubenswrapper[31411]: I0224 02:35:54.873348 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="34321c8e-7008-47b6-99ad-7752b89e8045" containerName="mariadb-account-create-update"
Feb 24 02:35:54.877536 master-0 kubenswrapper[31411]: I0224 02:35:54.874145 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:54.883094 master-0 kubenswrapper[31411]: I0224 02:35:54.879773 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 24 02:35:54.896596 master-0 kubenswrapper[31411]: I0224 02:35:54.895061 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hjmv9-config-wk789"]
Feb 24 02:35:54.947941 master-0 kubenswrapper[31411]: I0224 02:35:54.947889 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"102af0f1913a7d89234c780851179c5f8652e221e0afcb80573dbe6bfa718614"}
Feb 24 02:35:54.948301 master-0 kubenswrapper[31411]: I0224 02:35:54.947943 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"192d7d9e7588c736868d6a65171e95080be4edb42992733145f69b14d5c6061e"}
Feb 24 02:35:54.948301 master-0 kubenswrapper[31411]: I0224 02:35:54.947956 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"6cceac12474ae0cf780e3acc710549c5aea84c4488c251e5ae05aeadf1d09a07"}
Feb 24 02:35:54.948301 master-0 kubenswrapper[31411]: I0224 02:35:54.947965 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"726f0ff621396eec83d26d58351fa69b773048475d8d6fd351a9a9c6f44ff0a0"}
Feb 24 02:35:55.039859 master-0 kubenswrapper[31411]: I0224 02:35:55.039772 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.039956 master-0 kubenswrapper[31411]: I0224 02:35:55.039875 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-scripts\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.039956 master-0 kubenswrapper[31411]: I0224 02:35:55.039934 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run-ovn\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.040032 master-0 kubenswrapper[31411]: I0224 02:35:55.039999 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-log-ovn\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.040387 master-0 kubenswrapper[31411]: I0224 02:35:55.040234 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-additional-scripts\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.040387 master-0 kubenswrapper[31411]: I0224 02:35:55.040296 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn5pb\" (UniqueName: \"kubernetes.io/projected/b84c9a81-b73a-4542-8445-3dab58e1b67d-kube-api-access-tn5pb\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.145081 master-0 kubenswrapper[31411]: I0224 02:35:55.145034 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-scripts\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.145415 master-0 kubenswrapper[31411]: I0224 02:35:55.145391 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run-ovn\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.145669 master-0 kubenswrapper[31411]: I0224 02:35:55.145649 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-log-ovn\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.145840 master-0 kubenswrapper[31411]: I0224 02:35:55.145822 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-additional-scripts\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.145970 master-0 kubenswrapper[31411]: I0224 02:35:55.145950 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn5pb\" (UniqueName: \"kubernetes.io/projected/b84c9a81-b73a-4542-8445-3dab58e1b67d-kube-api-access-tn5pb\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.146191 master-0 kubenswrapper[31411]: I0224 02:35:55.146172 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.146525 master-0 kubenswrapper[31411]: I0224 02:35:55.146470 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-log-ovn\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.147133 master-0 kubenswrapper[31411]: I0224 02:35:55.147110 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run-ovn\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.147254 master-0 kubenswrapper[31411]: I0224 02:35:55.147169 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-additional-scripts\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.147343 master-0 kubenswrapper[31411]: I0224 02:35:55.147212 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-scripts\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.147426 master-0 kubenswrapper[31411]: I0224 02:35:55.147265 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.168515 master-0 kubenswrapper[31411]: I0224 02:35:55.168473 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn5pb\" (UniqueName: \"kubernetes.io/projected/b84c9a81-b73a-4542-8445-3dab58e1b67d-kube-api-access-tn5pb\") pod \"ovn-controller-hjmv9-config-wk789\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.201984 master-0 kubenswrapper[31411]: I0224 02:35:55.201925 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hjmv9-config-wk789"
Feb 24 02:35:55.763820 master-0 kubenswrapper[31411]: I0224 02:35:55.763758 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hjmv9-config-wk789"]
Feb 24 02:35:55.972472 master-0 kubenswrapper[31411]: I0224 02:35:55.972368 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"129ae937b43174de2cf0111b1ccfe4c378f77d657928b1f5ca94ca6d642eb24f"}
Feb 24 02:35:55.972472 master-0 kubenswrapper[31411]: I0224 02:35:55.972471 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"31fa4af8a1a19c60dc403c0cda6d67db8edab511b81ec374e51f603ac2517288"}
Feb 24 02:35:55.973467 master-0 kubenswrapper[31411]: I0224 02:35:55.972499 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9c814b7c-b62b-4104-8139-8e6cd597d33f","Type":"ContainerStarted","Data":"b38ffd5d60310ab0bd0df536546cb4b742f98314c7c296f06a115755cf287f89"}
Feb 24 02:35:55.975281 master-0 kubenswrapper[31411]: I0224 02:35:55.975211 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hjmv9-config-wk789" event={"ID":"b84c9a81-b73a-4542-8445-3dab58e1b67d","Type":"ContainerStarted","Data":"4934ce8e225f64cc043edd4295d6c7548b8615f4f70f54d49c7e41d51b13636c"}
Feb 24 02:35:56.037413 master-0 kubenswrapper[31411]: I0224 02:35:56.032353 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.961695029 podStartE2EDuration="27.032321594s" podCreationTimestamp="2026-02-24 02:35:29 +0000 UTC" firstStartedPulling="2026-02-24 02:35:47.706785047 +0000 UTC m=+890.923982903" lastFinishedPulling="2026-02-24 02:35:53.777411622 +0000 UTC m=+896.994609468" observedRunningTime="2026-02-24 02:35:56.018839456 +0000 UTC m=+899.236037312" watchObservedRunningTime="2026-02-24 02:35:56.032321594 +0000 UTC m=+899.249519450"
Feb 24 02:35:56.375533 master-0 kubenswrapper[31411]: I0224 02:35:56.372340 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84556f859-6lpst"]
Feb 24 02:35:56.379623 master-0 kubenswrapper[31411]: I0224 02:35:56.376583 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84556f859-6lpst"
Feb 24 02:35:56.379623 master-0 kubenswrapper[31411]: I0224 02:35:56.378765 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 24 02:35:56.409601 master-0 kubenswrapper[31411]: I0224 02:35:56.409378 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84556f859-6lpst"]
Feb 24 02:35:56.426616 master-0 kubenswrapper[31411]: I0224 02:35:56.423023 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-swift-storage-0\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst"
Feb 24 02:35:56.426616 master-0 kubenswrapper[31411]: I0224 02:35:56.423101 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcvh2\" (UniqueName: \"kubernetes.io/projected/49237a00-8280-4287-a6fd-9fdf6a486c95-kube-api-access-vcvh2\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst"
Feb 24 02:35:56.426616 master-0 kubenswrapper[31411]: I0224 02:35:56.423140 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-nb\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst"
Feb 24 02:35:56.426616 master-0 kubenswrapper[31411]: I0224 02:35:56.423169 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-config\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst"
Feb 24 02:35:56.426616 master-0 kubenswrapper[31411]: I0224 02:35:56.423192 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-svc\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst"
Feb 24 02:35:56.426616 master-0 kubenswrapper[31411]: I0224 02:35:56.423234 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-sb\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst"
Feb 24 02:35:56.515285 master-0 kubenswrapper[31411]: I0224 02:35:56.515210 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-klrwt"]
Feb 24 02:35:56.516762 master-0 kubenswrapper[31411]: I0224 02:35:56.516734 31411 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-klrwt" Feb 24 02:35:56.522422 master-0 kubenswrapper[31411]: I0224 02:35:56.519691 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 24 02:35:56.527514 master-0 kubenswrapper[31411]: I0224 02:35:56.526254 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-swift-storage-0\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.527514 master-0 kubenswrapper[31411]: I0224 02:35:56.526333 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcvh2\" (UniqueName: \"kubernetes.io/projected/49237a00-8280-4287-a6fd-9fdf6a486c95-kube-api-access-vcvh2\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.527514 master-0 kubenswrapper[31411]: I0224 02:35:56.526371 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-nb\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.527514 master-0 kubenswrapper[31411]: I0224 02:35:56.526409 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-config\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.527514 master-0 kubenswrapper[31411]: I0224 02:35:56.526430 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-svc\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.527514 master-0 kubenswrapper[31411]: I0224 02:35:56.526475 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-sb\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.527514 master-0 kubenswrapper[31411]: I0224 02:35:56.527349 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-sb\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.531590 master-0 kubenswrapper[31411]: I0224 02:35:56.528058 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-svc\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.531590 master-0 kubenswrapper[31411]: I0224 02:35:56.528476 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-config\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.531590 master-0 kubenswrapper[31411]: I0224 02:35:56.528557 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-swift-storage-0\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.531590 master-0 kubenswrapper[31411]: I0224 02:35:56.528954 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-nb\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.533330 master-0 kubenswrapper[31411]: I0224 02:35:56.533257 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-klrwt"] Feb 24 02:35:56.548755 master-0 kubenswrapper[31411]: I0224 02:35:56.548703 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcvh2\" (UniqueName: \"kubernetes.io/projected/49237a00-8280-4287-a6fd-9fdf6a486c95-kube-api-access-vcvh2\") pod \"dnsmasq-dns-84556f859-6lpst\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.630145 master-0 kubenswrapper[31411]: I0224 02:35:56.629984 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/dacf13b3-b36b-40db-8824-550ffb0b6cbd-kube-api-access-nxkr6\") pod \"root-account-create-update-klrwt\" (UID: \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\") " pod="openstack/root-account-create-update-klrwt" Feb 24 02:35:56.630462 master-0 kubenswrapper[31411]: I0224 02:35:56.630418 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dacf13b3-b36b-40db-8824-550ffb0b6cbd-operator-scripts\") pod 
\"root-account-create-update-klrwt\" (UID: \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\") " pod="openstack/root-account-create-update-klrwt" Feb 24 02:35:56.754601 master-0 kubenswrapper[31411]: I0224 02:35:56.739171 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dacf13b3-b36b-40db-8824-550ffb0b6cbd-operator-scripts\") pod \"root-account-create-update-klrwt\" (UID: \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\") " pod="openstack/root-account-create-update-klrwt" Feb 24 02:35:56.754601 master-0 kubenswrapper[31411]: I0224 02:35:56.739552 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/dacf13b3-b36b-40db-8824-550ffb0b6cbd-kube-api-access-nxkr6\") pod \"root-account-create-update-klrwt\" (UID: \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\") " pod="openstack/root-account-create-update-klrwt" Feb 24 02:35:56.754601 master-0 kubenswrapper[31411]: I0224 02:35:56.741549 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dacf13b3-b36b-40db-8824-550ffb0b6cbd-operator-scripts\") pod \"root-account-create-update-klrwt\" (UID: \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\") " pod="openstack/root-account-create-update-klrwt" Feb 24 02:35:56.754601 master-0 kubenswrapper[31411]: I0224 02:35:56.752342 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:56.763000 master-0 kubenswrapper[31411]: I0224 02:35:56.762945 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/dacf13b3-b36b-40db-8824-550ffb0b6cbd-kube-api-access-nxkr6\") pod \"root-account-create-update-klrwt\" (UID: \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\") " pod="openstack/root-account-create-update-klrwt" Feb 24 02:35:56.836807 master-0 kubenswrapper[31411]: I0224 02:35:56.836713 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-klrwt" Feb 24 02:35:57.042584 master-0 kubenswrapper[31411]: I0224 02:35:57.042050 31411 generic.go:334] "Generic (PLEG): container finished" podID="b84c9a81-b73a-4542-8445-3dab58e1b67d" containerID="ad0ba8e89adf8f5cee99a82e7af1422d1983fe7b26bd545688e3c013c7c9b9b7" exitCode=0 Feb 24 02:35:57.043292 master-0 kubenswrapper[31411]: I0224 02:35:57.042747 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hjmv9-config-wk789" event={"ID":"b84c9a81-b73a-4542-8445-3dab58e1b67d","Type":"ContainerDied","Data":"ad0ba8e89adf8f5cee99a82e7af1422d1983fe7b26bd545688e3c013c7c9b9b7"} Feb 24 02:35:57.326366 master-0 kubenswrapper[31411]: I0224 02:35:57.326286 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84556f859-6lpst"] Feb 24 02:35:57.326892 master-0 kubenswrapper[31411]: W0224 02:35:57.326824 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49237a00_8280_4287_a6fd_9fdf6a486c95.slice/crio-764312acfdf8be88a514b6108c4ef989edfe6f6f49b33150c19cd99094c6c3a3 WatchSource:0}: Error finding container 764312acfdf8be88a514b6108c4ef989edfe6f6f49b33150c19cd99094c6c3a3: Status 404 returned error can't find the container with id 
764312acfdf8be88a514b6108c4ef989edfe6f6f49b33150c19cd99094c6c3a3 Feb 24 02:35:57.432219 master-0 kubenswrapper[31411]: I0224 02:35:57.432150 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-klrwt"] Feb 24 02:35:57.436415 master-0 kubenswrapper[31411]: W0224 02:35:57.436336 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddacf13b3_b36b_40db_8824_550ffb0b6cbd.slice/crio-d1b0e16e1983a513700f5427907f163185e1cdea0eea7cc7ae753e22aedb89ac WatchSource:0}: Error finding container d1b0e16e1983a513700f5427907f163185e1cdea0eea7cc7ae753e22aedb89ac: Status 404 returned error can't find the container with id d1b0e16e1983a513700f5427907f163185e1cdea0eea7cc7ae753e22aedb89ac Feb 24 02:35:57.454552 master-0 kubenswrapper[31411]: I0224 02:35:57.454512 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 24 02:35:58.069150 master-0 kubenswrapper[31411]: I0224 02:35:58.063008 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-klrwt" event={"ID":"dacf13b3-b36b-40db-8824-550ffb0b6cbd","Type":"ContainerStarted","Data":"0f8014e374704ae7a775f025c14900e36ea34a3ac04cdda51b844d6643cd40a8"} Feb 24 02:35:58.069150 master-0 kubenswrapper[31411]: I0224 02:35:58.063069 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-klrwt" event={"ID":"dacf13b3-b36b-40db-8824-550ffb0b6cbd","Type":"ContainerStarted","Data":"d1b0e16e1983a513700f5427907f163185e1cdea0eea7cc7ae753e22aedb89ac"} Feb 24 02:35:58.069150 master-0 kubenswrapper[31411]: I0224 02:35:58.066903 31411 generic.go:334] "Generic (PLEG): container finished" podID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerID="a61e37267045d9a0a6799656e5f6d8df89b360970163b968a58b7750b629bd84" exitCode=0 Feb 24 02:35:58.069150 master-0 kubenswrapper[31411]: I0224 
02:35:58.067091 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84556f859-6lpst" event={"ID":"49237a00-8280-4287-a6fd-9fdf6a486c95","Type":"ContainerDied","Data":"a61e37267045d9a0a6799656e5f6d8df89b360970163b968a58b7750b629bd84"} Feb 24 02:35:58.069150 master-0 kubenswrapper[31411]: I0224 02:35:58.067114 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84556f859-6lpst" event={"ID":"49237a00-8280-4287-a6fd-9fdf6a486c95","Type":"ContainerStarted","Data":"764312acfdf8be88a514b6108c4ef989edfe6f6f49b33150c19cd99094c6c3a3"} Feb 24 02:35:58.134002 master-0 kubenswrapper[31411]: I0224 02:35:58.132595 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-klrwt" podStartSLOduration=2.132548239 podStartE2EDuration="2.132548239s" podCreationTimestamp="2026-02-24 02:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:35:58.099314887 +0000 UTC m=+901.316512753" watchObservedRunningTime="2026-02-24 02:35:58.132548239 +0000 UTC m=+901.349746095" Feb 24 02:35:58.495697 master-0 kubenswrapper[31411]: I0224 02:35:58.495623 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hjmv9-config-wk789" Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.597542 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-log-ovn\") pod \"b84c9a81-b73a-4542-8445-3dab58e1b67d\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.597762 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn5pb\" (UniqueName: \"kubernetes.io/projected/b84c9a81-b73a-4542-8445-3dab58e1b67d-kube-api-access-tn5pb\") pod \"b84c9a81-b73a-4542-8445-3dab58e1b67d\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.597801 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-additional-scripts\") pod \"b84c9a81-b73a-4542-8445-3dab58e1b67d\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.597900 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run-ovn\") pod \"b84c9a81-b73a-4542-8445-3dab58e1b67d\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.597911 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b84c9a81-b73a-4542-8445-3dab58e1b67d" (UID: "b84c9a81-b73a-4542-8445-3dab58e1b67d"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.597959 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run\") pod \"b84c9a81-b73a-4542-8445-3dab58e1b67d\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.598009 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b84c9a81-b73a-4542-8445-3dab58e1b67d" (UID: "b84c9a81-b73a-4542-8445-3dab58e1b67d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.598042 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-scripts\") pod \"b84c9a81-b73a-4542-8445-3dab58e1b67d\" (UID: \"b84c9a81-b73a-4542-8445-3dab58e1b67d\") " Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.598050 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run" (OuterVolumeSpecName: "var-run") pod "b84c9a81-b73a-4542-8445-3dab58e1b67d" (UID: "b84c9a81-b73a-4542-8445-3dab58e1b67d"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.598531 31411 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.598546 31411 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.598557 31411 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b84c9a81-b73a-4542-8445-3dab58e1b67d-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.598553 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b84c9a81-b73a-4542-8445-3dab58e1b67d" (UID: "b84c9a81-b73a-4542-8445-3dab58e1b67d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:58.602071 master-0 kubenswrapper[31411]: I0224 02:35:58.598964 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-scripts" (OuterVolumeSpecName: "scripts") pod "b84c9a81-b73a-4542-8445-3dab58e1b67d" (UID: "b84c9a81-b73a-4542-8445-3dab58e1b67d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:35:58.602840 master-0 kubenswrapper[31411]: I0224 02:35:58.602194 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b84c9a81-b73a-4542-8445-3dab58e1b67d-kube-api-access-tn5pb" (OuterVolumeSpecName: "kube-api-access-tn5pb") pod "b84c9a81-b73a-4542-8445-3dab58e1b67d" (UID: "b84c9a81-b73a-4542-8445-3dab58e1b67d"). InnerVolumeSpecName "kube-api-access-tn5pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:35:58.703561 master-0 kubenswrapper[31411]: I0224 02:35:58.703468 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn5pb\" (UniqueName: \"kubernetes.io/projected/b84c9a81-b73a-4542-8445-3dab58e1b67d-kube-api-access-tn5pb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:58.703561 master-0 kubenswrapper[31411]: I0224 02:35:58.703540 31411 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:58.703561 master-0 kubenswrapper[31411]: I0224 02:35:58.703559 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b84c9a81-b73a-4542-8445-3dab58e1b67d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:35:59.081617 master-0 kubenswrapper[31411]: I0224 02:35:59.081538 31411 generic.go:334] "Generic (PLEG): container finished" podID="dacf13b3-b36b-40db-8824-550ffb0b6cbd" containerID="0f8014e374704ae7a775f025c14900e36ea34a3ac04cdda51b844d6643cd40a8" exitCode=0 Feb 24 02:35:59.082167 master-0 kubenswrapper[31411]: I0224 02:35:59.081635 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-klrwt" 
event={"ID":"dacf13b3-b36b-40db-8824-550ffb0b6cbd","Type":"ContainerDied","Data":"0f8014e374704ae7a775f025c14900e36ea34a3ac04cdda51b844d6643cd40a8"} Feb 24 02:35:59.094878 master-0 kubenswrapper[31411]: I0224 02:35:59.094845 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hjmv9-config-wk789" Feb 24 02:35:59.123949 master-0 kubenswrapper[31411]: I0224 02:35:59.123865 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84556f859-6lpst" event={"ID":"49237a00-8280-4287-a6fd-9fdf6a486c95","Type":"ContainerStarted","Data":"bc0d39e514b6aaeeaeac1c835533735f6eb360a0428ecb221dc08d014e43c8fb"} Feb 24 02:35:59.123949 master-0 kubenswrapper[31411]: I0224 02:35:59.123932 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:35:59.125038 master-0 kubenswrapper[31411]: I0224 02:35:59.124962 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hjmv9-config-wk789" event={"ID":"b84c9a81-b73a-4542-8445-3dab58e1b67d","Type":"ContainerDied","Data":"4934ce8e225f64cc043edd4295d6c7548b8615f4f70f54d49c7e41d51b13636c"} Feb 24 02:35:59.125038 master-0 kubenswrapper[31411]: I0224 02:35:59.125013 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4934ce8e225f64cc043edd4295d6c7548b8615f4f70f54d49c7e41d51b13636c" Feb 24 02:35:59.182705 master-0 kubenswrapper[31411]: I0224 02:35:59.182605 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84556f859-6lpst" podStartSLOduration=3.182556224 podStartE2EDuration="3.182556224s" podCreationTimestamp="2026-02-24 02:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:35:59.169789886 +0000 UTC m=+902.386987742" watchObservedRunningTime="2026-02-24 02:35:59.182556224 +0000 
UTC m=+902.399754070" Feb 24 02:35:59.476836 master-0 kubenswrapper[31411]: I0224 02:35:59.476772 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hjmv9" Feb 24 02:35:59.651541 master-0 kubenswrapper[31411]: I0224 02:35:59.651452 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hjmv9-config-wk789"] Feb 24 02:35:59.672466 master-0 kubenswrapper[31411]: I0224 02:35:59.672360 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hjmv9-config-wk789"] Feb 24 02:35:59.790421 master-0 kubenswrapper[31411]: I0224 02:35:59.790208 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hjmv9-config-rhxl6"] Feb 24 02:35:59.790804 master-0 kubenswrapper[31411]: E0224 02:35:59.790709 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84c9a81-b73a-4542-8445-3dab58e1b67d" containerName="ovn-config" Feb 24 02:35:59.790804 master-0 kubenswrapper[31411]: I0224 02:35:59.790725 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84c9a81-b73a-4542-8445-3dab58e1b67d" containerName="ovn-config" Feb 24 02:35:59.790964 master-0 kubenswrapper[31411]: I0224 02:35:59.790940 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="b84c9a81-b73a-4542-8445-3dab58e1b67d" containerName="ovn-config" Feb 24 02:35:59.791753 master-0 kubenswrapper[31411]: I0224 02:35:59.791707 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:35:59.794057 master-0 kubenswrapper[31411]: I0224 02:35:59.793991 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 24 02:35:59.810301 master-0 kubenswrapper[31411]: I0224 02:35:59.810219 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hjmv9-config-rhxl6"] Feb 24 02:35:59.944849 master-0 kubenswrapper[31411]: I0224 02:35:59.944774 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:35:59.945123 master-0 kubenswrapper[31411]: I0224 02:35:59.944891 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-scripts\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:35:59.945123 master-0 kubenswrapper[31411]: I0224 02:35:59.944928 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw296\" (UniqueName: \"kubernetes.io/projected/23c2f7f9-8374-47bd-b845-2b415f499ba3-kube-api-access-pw296\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:35:59.945123 master-0 kubenswrapper[31411]: I0224 02:35:59.945028 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run-ovn\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:35:59.945521 master-0 kubenswrapper[31411]: I0224 02:35:59.945437 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-log-ovn\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:35:59.945797 master-0 kubenswrapper[31411]: I0224 02:35:59.945762 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-additional-scripts\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.048437 master-0 kubenswrapper[31411]: I0224 02:36:00.048185 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-scripts\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.048437 master-0 kubenswrapper[31411]: I0224 02:36:00.048270 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw296\" (UniqueName: \"kubernetes.io/projected/23c2f7f9-8374-47bd-b845-2b415f499ba3-kube-api-access-pw296\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.048437 master-0 kubenswrapper[31411]: I0224 02:36:00.048385 31411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run-ovn\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.048867 master-0 kubenswrapper[31411]: I0224 02:36:00.048460 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-log-ovn\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.048867 master-0 kubenswrapper[31411]: I0224 02:36:00.048674 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run-ovn\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.048867 master-0 kubenswrapper[31411]: I0224 02:36:00.048776 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-additional-scripts\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.049033 master-0 kubenswrapper[31411]: I0224 02:36:00.048990 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-log-ovn\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.049087 master-0 
kubenswrapper[31411]: I0224 02:36:00.049043 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.049271 master-0 kubenswrapper[31411]: I0224 02:36:00.049171 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.050852 master-0 kubenswrapper[31411]: I0224 02:36:00.050793 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-additional-scripts\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.052844 master-0 kubenswrapper[31411]: I0224 02:36:00.052795 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-scripts\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.068048 master-0 kubenswrapper[31411]: I0224 02:36:00.067991 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw296\" (UniqueName: \"kubernetes.io/projected/23c2f7f9-8374-47bd-b845-2b415f499ba3-kube-api-access-pw296\") pod \"ovn-controller-hjmv9-config-rhxl6\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 
02:36:00.116616 master-0 kubenswrapper[31411]: I0224 02:36:00.114852 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:00.814918 master-0 kubenswrapper[31411]: I0224 02:36:00.814837 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hjmv9-config-rhxl6"] Feb 24 02:36:01.125983 master-0 kubenswrapper[31411]: I0224 02:36:01.125905 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b84c9a81-b73a-4542-8445-3dab58e1b67d" path="/var/lib/kubelet/pods/b84c9a81-b73a-4542-8445-3dab58e1b67d/volumes" Feb 24 02:36:01.840940 master-0 kubenswrapper[31411]: I0224 02:36:01.840867 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 24 02:36:02.244820 master-0 kubenswrapper[31411]: I0224 02:36:02.241697 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vptkz"] Feb 24 02:36:02.244820 master-0 kubenswrapper[31411]: I0224 02:36:02.244456 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:02.255646 master-0 kubenswrapper[31411]: I0224 02:36:02.253745 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vptkz"] Feb 24 02:36:02.334675 master-0 kubenswrapper[31411]: I0224 02:36:02.334593 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzv7x\" (UniqueName: \"kubernetes.io/projected/6bdc9612-8122-43dd-b0bd-3a2044dc4848-kube-api-access-dzv7x\") pod \"cinder-db-create-vptkz\" (UID: \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\") " pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:02.334942 master-0 kubenswrapper[31411]: I0224 02:36:02.334823 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc9612-8122-43dd-b0bd-3a2044dc4848-operator-scripts\") pod \"cinder-db-create-vptkz\" (UID: \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\") " pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:02.368612 master-0 kubenswrapper[31411]: I0224 02:36:02.368510 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ed6f-account-create-update-kn7d6"] Feb 24 02:36:02.370494 master-0 kubenswrapper[31411]: I0224 02:36:02.370464 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:02.375536 master-0 kubenswrapper[31411]: I0224 02:36:02.375350 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 24 02:36:02.386107 master-0 kubenswrapper[31411]: I0224 02:36:02.385214 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ed6f-account-create-update-kn7d6"] Feb 24 02:36:02.442345 master-0 kubenswrapper[31411]: I0224 02:36:02.442212 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc9612-8122-43dd-b0bd-3a2044dc4848-operator-scripts\") pod \"cinder-db-create-vptkz\" (UID: \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\") " pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:02.442624 master-0 kubenswrapper[31411]: I0224 02:36:02.442486 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p54xd\" (UniqueName: \"kubernetes.io/projected/5c7949ba-1c60-4a1d-872b-cc388aba1adc-kube-api-access-p54xd\") pod \"cinder-ed6f-account-create-update-kn7d6\" (UID: \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\") " pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:02.442624 master-0 kubenswrapper[31411]: I0224 02:36:02.442543 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c7949ba-1c60-4a1d-872b-cc388aba1adc-operator-scripts\") pod \"cinder-ed6f-account-create-update-kn7d6\" (UID: \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\") " pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:02.442624 master-0 kubenswrapper[31411]: I0224 02:36:02.442614 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzv7x\" (UniqueName: 
\"kubernetes.io/projected/6bdc9612-8122-43dd-b0bd-3a2044dc4848-kube-api-access-dzv7x\") pod \"cinder-db-create-vptkz\" (UID: \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\") " pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:02.443812 master-0 kubenswrapper[31411]: I0224 02:36:02.443780 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc9612-8122-43dd-b0bd-3a2044dc4848-operator-scripts\") pod \"cinder-db-create-vptkz\" (UID: \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\") " pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:02.469770 master-0 kubenswrapper[31411]: I0224 02:36:02.466555 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzv7x\" (UniqueName: \"kubernetes.io/projected/6bdc9612-8122-43dd-b0bd-3a2044dc4848-kube-api-access-dzv7x\") pod \"cinder-db-create-vptkz\" (UID: \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\") " pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:02.545873 master-0 kubenswrapper[31411]: I0224 02:36:02.545805 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p54xd\" (UniqueName: \"kubernetes.io/projected/5c7949ba-1c60-4a1d-872b-cc388aba1adc-kube-api-access-p54xd\") pod \"cinder-ed6f-account-create-update-kn7d6\" (UID: \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\") " pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:02.546749 master-0 kubenswrapper[31411]: I0224 02:36:02.545883 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c7949ba-1c60-4a1d-872b-cc388aba1adc-operator-scripts\") pod \"cinder-ed6f-account-create-update-kn7d6\" (UID: \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\") " pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:02.546806 master-0 kubenswrapper[31411]: I0224 02:36:02.546727 31411 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c7949ba-1c60-4a1d-872b-cc388aba1adc-operator-scripts\") pod \"cinder-ed6f-account-create-update-kn7d6\" (UID: \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\") " pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:02.566472 master-0 kubenswrapper[31411]: I0224 02:36:02.566407 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:02.931835 master-0 kubenswrapper[31411]: I0224 02:36:02.931743 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4z2pz"] Feb 24 02:36:02.933705 master-0 kubenswrapper[31411]: I0224 02:36:02.933675 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:02.936093 master-0 kubenswrapper[31411]: I0224 02:36:02.936042 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p54xd\" (UniqueName: \"kubernetes.io/projected/5c7949ba-1c60-4a1d-872b-cc388aba1adc-kube-api-access-p54xd\") pod \"cinder-ed6f-account-create-update-kn7d6\" (UID: \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\") " pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:02.945004 master-0 kubenswrapper[31411]: I0224 02:36:02.944943 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 24 02:36:02.945269 master-0 kubenswrapper[31411]: I0224 02:36:02.945203 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 24 02:36:02.946105 master-0 kubenswrapper[31411]: I0224 02:36:02.945361 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 24 02:36:02.970387 master-0 kubenswrapper[31411]: I0224 02:36:02.967655 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-x5qn7"] Feb 24 02:36:02.970387 master-0 
kubenswrapper[31411]: I0224 02:36:02.969498 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:02.982078 master-0 kubenswrapper[31411]: I0224 02:36:02.982004 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4z2pz"] Feb 24 02:36:02.993595 master-0 kubenswrapper[31411]: I0224 02:36:02.991678 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:03.038710 master-0 kubenswrapper[31411]: I0224 02:36:03.038641 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x5qn7"] Feb 24 02:36:03.062990 master-0 kubenswrapper[31411]: I0224 02:36:03.062920 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4459f0c8-08d6-4c34-8144-f36dc7408608-operator-scripts\") pod \"neutron-db-create-x5qn7\" (UID: \"4459f0c8-08d6-4c34-8144-f36dc7408608\") " pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:03.063372 master-0 kubenswrapper[31411]: I0224 02:36:03.063352 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r46wh\" (UniqueName: \"kubernetes.io/projected/4459f0c8-08d6-4c34-8144-f36dc7408608-kube-api-access-r46wh\") pod \"neutron-db-create-x5qn7\" (UID: \"4459f0c8-08d6-4c34-8144-f36dc7408608\") " pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:03.063517 master-0 kubenswrapper[31411]: I0224 02:36:03.063496 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-combined-ca-bundle\") pod \"keystone-db-sync-4z2pz\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") " pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.063720 
master-0 kubenswrapper[31411]: I0224 02:36:03.063702 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lv8w\" (UniqueName: \"kubernetes.io/projected/0bdbc28e-3ed7-4454-8421-043a9d2864c8-kube-api-access-7lv8w\") pod \"keystone-db-sync-4z2pz\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") " pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.063821 master-0 kubenswrapper[31411]: I0224 02:36:03.063806 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-config-data\") pod \"keystone-db-sync-4z2pz\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") " pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.083550 master-0 kubenswrapper[31411]: I0224 02:36:03.083471 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7051-account-create-update-2j7gx"] Feb 24 02:36:03.086106 master-0 kubenswrapper[31411]: I0224 02:36:03.086085 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7051-account-create-update-2j7gx" Feb 24 02:36:03.089447 master-0 kubenswrapper[31411]: I0224 02:36:03.089381 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 24 02:36:03.119699 master-0 kubenswrapper[31411]: I0224 02:36:03.118543 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7051-account-create-update-2j7gx"] Feb 24 02:36:03.166290 master-0 kubenswrapper[31411]: I0224 02:36:03.166228 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4459f0c8-08d6-4c34-8144-f36dc7408608-operator-scripts\") pod \"neutron-db-create-x5qn7\" (UID: \"4459f0c8-08d6-4c34-8144-f36dc7408608\") " pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:03.166972 master-0 kubenswrapper[31411]: I0224 02:36:03.166904 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r46wh\" (UniqueName: \"kubernetes.io/projected/4459f0c8-08d6-4c34-8144-f36dc7408608-kube-api-access-r46wh\") pod \"neutron-db-create-x5qn7\" (UID: \"4459f0c8-08d6-4c34-8144-f36dc7408608\") " pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:03.167272 master-0 kubenswrapper[31411]: I0224 02:36:03.167245 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-combined-ca-bundle\") pod \"keystone-db-sync-4z2pz\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") " pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.167983 master-0 kubenswrapper[31411]: I0224 02:36:03.167924 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4459f0c8-08d6-4c34-8144-f36dc7408608-operator-scripts\") pod \"neutron-db-create-x5qn7\" (UID: \"4459f0c8-08d6-4c34-8144-f36dc7408608\") " 
pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:03.168046 master-0 kubenswrapper[31411]: I0224 02:36:03.167942 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lv8w\" (UniqueName: \"kubernetes.io/projected/0bdbc28e-3ed7-4454-8421-043a9d2864c8-kube-api-access-7lv8w\") pod \"keystone-db-sync-4z2pz\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") " pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.168088 master-0 kubenswrapper[31411]: I0224 02:36:03.168051 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gs9d\" (UniqueName: \"kubernetes.io/projected/6ac75776-ea65-4171-9504-a54ece21245f-kube-api-access-7gs9d\") pod \"neutron-7051-account-create-update-2j7gx\" (UID: \"6ac75776-ea65-4171-9504-a54ece21245f\") " pod="openstack/neutron-7051-account-create-update-2j7gx" Feb 24 02:36:03.168126 master-0 kubenswrapper[31411]: I0224 02:36:03.168109 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-config-data\") pod \"keystone-db-sync-4z2pz\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") " pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.168242 master-0 kubenswrapper[31411]: I0224 02:36:03.168190 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ac75776-ea65-4171-9504-a54ece21245f-operator-scripts\") pod \"neutron-7051-account-create-update-2j7gx\" (UID: \"6ac75776-ea65-4171-9504-a54ece21245f\") " pod="openstack/neutron-7051-account-create-update-2j7gx" Feb 24 02:36:03.173513 master-0 kubenswrapper[31411]: I0224 02:36:03.171983 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-combined-ca-bundle\") pod \"keystone-db-sync-4z2pz\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") " pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.173617 master-0 kubenswrapper[31411]: I0224 02:36:03.173510 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-config-data\") pod \"keystone-db-sync-4z2pz\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") " pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.186700 master-0 kubenswrapper[31411]: I0224 02:36:03.186622 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r46wh\" (UniqueName: \"kubernetes.io/projected/4459f0c8-08d6-4c34-8144-f36dc7408608-kube-api-access-r46wh\") pod \"neutron-db-create-x5qn7\" (UID: \"4459f0c8-08d6-4c34-8144-f36dc7408608\") " pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:03.189635 master-0 kubenswrapper[31411]: I0224 02:36:03.189608 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lv8w\" (UniqueName: \"kubernetes.io/projected/0bdbc28e-3ed7-4454-8421-043a9d2864c8-kube-api-access-7lv8w\") pod \"keystone-db-sync-4z2pz\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") " pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.270278 master-0 kubenswrapper[31411]: I0224 02:36:03.270218 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gs9d\" (UniqueName: \"kubernetes.io/projected/6ac75776-ea65-4171-9504-a54ece21245f-kube-api-access-7gs9d\") pod \"neutron-7051-account-create-update-2j7gx\" (UID: \"6ac75776-ea65-4171-9504-a54ece21245f\") " pod="openstack/neutron-7051-account-create-update-2j7gx" Feb 24 02:36:03.270278 master-0 kubenswrapper[31411]: I0224 02:36:03.270283 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/6ac75776-ea65-4171-9504-a54ece21245f-operator-scripts\") pod \"neutron-7051-account-create-update-2j7gx\" (UID: \"6ac75776-ea65-4171-9504-a54ece21245f\") " pod="openstack/neutron-7051-account-create-update-2j7gx" Feb 24 02:36:03.271816 master-0 kubenswrapper[31411]: I0224 02:36:03.271753 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ac75776-ea65-4171-9504-a54ece21245f-operator-scripts\") pod \"neutron-7051-account-create-update-2j7gx\" (UID: \"6ac75776-ea65-4171-9504-a54ece21245f\") " pod="openstack/neutron-7051-account-create-update-2j7gx" Feb 24 02:36:03.294110 master-0 kubenswrapper[31411]: I0224 02:36:03.294051 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 24 02:36:03.301014 master-0 kubenswrapper[31411]: I0224 02:36:03.300980 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gs9d\" (UniqueName: \"kubernetes.io/projected/6ac75776-ea65-4171-9504-a54ece21245f-kube-api-access-7gs9d\") pod \"neutron-7051-account-create-update-2j7gx\" (UID: \"6ac75776-ea65-4171-9504-a54ece21245f\") " pod="openstack/neutron-7051-account-create-update-2j7gx" Feb 24 02:36:03.324292 master-0 kubenswrapper[31411]: I0224 02:36:03.323642 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4z2pz" Feb 24 02:36:03.350148 master-0 kubenswrapper[31411]: I0224 02:36:03.350089 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:03.461881 master-0 kubenswrapper[31411]: I0224 02:36:03.461822 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7051-account-create-update-2j7gx" Feb 24 02:36:06.756746 master-0 kubenswrapper[31411]: I0224 02:36:06.756674 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:36:07.087296 master-0 kubenswrapper[31411]: I0224 02:36:07.087114 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b55dc5f67-k2lcw"] Feb 24 02:36:07.087564 master-0 kubenswrapper[31411]: I0224 02:36:07.087429 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" podUID="42530b69-b5be-4699-873f-51cf3161c255" containerName="dnsmasq-dns" containerID="cri-o://b47802dc03ac933559f7cacd5354414ae6b343727c0f8b39b0f31fbc1f75c829" gracePeriod=10 Feb 24 02:36:07.324869 master-0 kubenswrapper[31411]: I0224 02:36:07.316124 31411 generic.go:334] "Generic (PLEG): container finished" podID="42530b69-b5be-4699-873f-51cf3161c255" containerID="b47802dc03ac933559f7cacd5354414ae6b343727c0f8b39b0f31fbc1f75c829" exitCode=0 Feb 24 02:36:07.324869 master-0 kubenswrapper[31411]: I0224 02:36:07.316181 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" event={"ID":"42530b69-b5be-4699-873f-51cf3161c255","Type":"ContainerDied","Data":"b47802dc03ac933559f7cacd5354414ae6b343727c0f8b39b0f31fbc1f75c829"} Feb 24 02:36:09.468592 master-0 kubenswrapper[31411]: I0224 02:36:09.468457 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" podUID="42530b69-b5be-4699-873f-51cf3161c255" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.182:5353: connect: connection refused" Feb 24 02:36:11.501119 master-0 kubenswrapper[31411]: W0224 02:36:11.500941 31411 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23c2f7f9_8374_47bd_b845_2b415f499ba3.slice/crio-20ce02cf5613e36f1adaebe09eac44e054aa460f35ec585150f0934069692069 WatchSource:0}: Error finding container 20ce02cf5613e36f1adaebe09eac44e054aa460f35ec585150f0934069692069: Status 404 returned error can't find the container with id 20ce02cf5613e36f1adaebe09eac44e054aa460f35ec585150f0934069692069 Feb 24 02:36:11.620756 master-0 kubenswrapper[31411]: I0224 02:36:11.620684 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-klrwt" Feb 24 02:36:11.750161 master-0 kubenswrapper[31411]: I0224 02:36:11.750042 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/dacf13b3-b36b-40db-8824-550ffb0b6cbd-kube-api-access-nxkr6\") pod \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\" (UID: \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\") " Feb 24 02:36:11.750323 master-0 kubenswrapper[31411]: I0224 02:36:11.750173 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dacf13b3-b36b-40db-8824-550ffb0b6cbd-operator-scripts\") pod \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\" (UID: \"dacf13b3-b36b-40db-8824-550ffb0b6cbd\") " Feb 24 02:36:11.751667 master-0 kubenswrapper[31411]: I0224 02:36:11.751600 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacf13b3-b36b-40db-8824-550ffb0b6cbd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dacf13b3-b36b-40db-8824-550ffb0b6cbd" (UID: "dacf13b3-b36b-40db-8824-550ffb0b6cbd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:11.756922 master-0 kubenswrapper[31411]: I0224 02:36:11.756825 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dacf13b3-b36b-40db-8824-550ffb0b6cbd-kube-api-access-nxkr6" (OuterVolumeSpecName: "kube-api-access-nxkr6") pod "dacf13b3-b36b-40db-8824-550ffb0b6cbd" (UID: "dacf13b3-b36b-40db-8824-550ffb0b6cbd"). InnerVolumeSpecName "kube-api-access-nxkr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:11.859507 master-0 kubenswrapper[31411]: I0224 02:36:11.859361 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/dacf13b3-b36b-40db-8824-550ffb0b6cbd-kube-api-access-nxkr6\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:11.859507 master-0 kubenswrapper[31411]: I0224 02:36:11.859405 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dacf13b3-b36b-40db-8824-550ffb0b6cbd-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:11.981263 master-0 kubenswrapper[31411]: I0224 02:36:11.981207 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" Feb 24 02:36:12.170557 master-0 kubenswrapper[31411]: I0224 02:36:12.170483 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-dns-svc\") pod \"42530b69-b5be-4699-873f-51cf3161c255\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " Feb 24 02:36:12.170704 master-0 kubenswrapper[31411]: I0224 02:36:12.170557 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-nb\") pod \"42530b69-b5be-4699-873f-51cf3161c255\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " Feb 24 02:36:12.170787 master-0 kubenswrapper[31411]: I0224 02:36:12.170762 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmzvk\" (UniqueName: \"kubernetes.io/projected/42530b69-b5be-4699-873f-51cf3161c255-kube-api-access-hmzvk\") pod \"42530b69-b5be-4699-873f-51cf3161c255\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " Feb 24 02:36:12.170854 master-0 kubenswrapper[31411]: I0224 02:36:12.170830 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-sb\") pod \"42530b69-b5be-4699-873f-51cf3161c255\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " Feb 24 02:36:12.171015 master-0 kubenswrapper[31411]: I0224 02:36:12.170992 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-config\") pod \"42530b69-b5be-4699-873f-51cf3161c255\" (UID: \"42530b69-b5be-4699-873f-51cf3161c255\") " Feb 24 02:36:12.175757 master-0 kubenswrapper[31411]: I0224 02:36:12.175696 31411 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42530b69-b5be-4699-873f-51cf3161c255-kube-api-access-hmzvk" (OuterVolumeSpecName: "kube-api-access-hmzvk") pod "42530b69-b5be-4699-873f-51cf3161c255" (UID: "42530b69-b5be-4699-873f-51cf3161c255"). InnerVolumeSpecName "kube-api-access-hmzvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:12.227048 master-0 kubenswrapper[31411]: I0224 02:36:12.226650 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42530b69-b5be-4699-873f-51cf3161c255" (UID: "42530b69-b5be-4699-873f-51cf3161c255"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:12.233566 master-0 kubenswrapper[31411]: I0224 02:36:12.233491 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42530b69-b5be-4699-873f-51cf3161c255" (UID: "42530b69-b5be-4699-873f-51cf3161c255"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:12.238641 master-0 kubenswrapper[31411]: I0224 02:36:12.238151 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42530b69-b5be-4699-873f-51cf3161c255" (UID: "42530b69-b5be-4699-873f-51cf3161c255"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:12.246499 master-0 kubenswrapper[31411]: I0224 02:36:12.246417 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-config" (OuterVolumeSpecName: "config") pod "42530b69-b5be-4699-873f-51cf3161c255" (UID: "42530b69-b5be-4699-873f-51cf3161c255"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:12.282650 master-0 kubenswrapper[31411]: I0224 02:36:12.282128 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmzvk\" (UniqueName: \"kubernetes.io/projected/42530b69-b5be-4699-873f-51cf3161c255-kube-api-access-hmzvk\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:12.282650 master-0 kubenswrapper[31411]: I0224 02:36:12.282175 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:12.282650 master-0 kubenswrapper[31411]: I0224 02:36:12.282191 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:12.282650 master-0 kubenswrapper[31411]: I0224 02:36:12.282207 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:12.282650 master-0 kubenswrapper[31411]: I0224 02:36:12.282220 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42530b69-b5be-4699-873f-51cf3161c255-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:12.378715 master-0 kubenswrapper[31411]: I0224 02:36:12.378626 31411 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7051-account-create-update-2j7gx"] Feb 24 02:36:12.391915 master-0 kubenswrapper[31411]: I0224 02:36:12.391878 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4z2pz"] Feb 24 02:36:12.401509 master-0 kubenswrapper[31411]: I0224 02:36:12.401459 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vptkz"] Feb 24 02:36:12.403799 master-0 kubenswrapper[31411]: I0224 02:36:12.403751 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" event={"ID":"42530b69-b5be-4699-873f-51cf3161c255","Type":"ContainerDied","Data":"3a947c896bab121a3fb3c5c6a8f0566d90ecf16668da27679d5113ef5e5d3efa"} Feb 24 02:36:12.403868 master-0 kubenswrapper[31411]: I0224 02:36:12.403812 31411 scope.go:117] "RemoveContainer" containerID="b47802dc03ac933559f7cacd5354414ae6b343727c0f8b39b0f31fbc1f75c829" Feb 24 02:36:12.403868 master-0 kubenswrapper[31411]: I0224 02:36:12.403810 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b55dc5f67-k2lcw" Feb 24 02:36:12.410234 master-0 kubenswrapper[31411]: W0224 02:36:12.409895 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ac75776_ea65_4171_9504_a54ece21245f.slice/crio-5494f8597ce1d6f1d7c1c2c6b75b094de5ffb79b89537af4e280a4cae3114ac2 WatchSource:0}: Error finding container 5494f8597ce1d6f1d7c1c2c6b75b094de5ffb79b89537af4e280a4cae3114ac2: Status 404 returned error can't find the container with id 5494f8597ce1d6f1d7c1c2c6b75b094de5ffb79b89537af4e280a4cae3114ac2 Feb 24 02:36:12.411022 master-0 kubenswrapper[31411]: I0224 02:36:12.410994 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-klrwt" Feb 24 02:36:12.411120 master-0 kubenswrapper[31411]: I0224 02:36:12.410972 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-klrwt" event={"ID":"dacf13b3-b36b-40db-8824-550ffb0b6cbd","Type":"ContainerDied","Data":"d1b0e16e1983a513700f5427907f163185e1cdea0eea7cc7ae753e22aedb89ac"} Feb 24 02:36:12.411189 master-0 kubenswrapper[31411]: I0224 02:36:12.411149 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1b0e16e1983a513700f5427907f163185e1cdea0eea7cc7ae753e22aedb89ac" Feb 24 02:36:12.413769 master-0 kubenswrapper[31411]: I0224 02:36:12.413719 31411 generic.go:334] "Generic (PLEG): container finished" podID="23c2f7f9-8374-47bd-b845-2b415f499ba3" containerID="076b9809054f4fc762518eee1e3e36bf62481bd76b5142ee7d69a6bd72ff5f8a" exitCode=0 Feb 24 02:36:12.413854 master-0 kubenswrapper[31411]: I0224 02:36:12.413783 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hjmv9-config-rhxl6" event={"ID":"23c2f7f9-8374-47bd-b845-2b415f499ba3","Type":"ContainerDied","Data":"076b9809054f4fc762518eee1e3e36bf62481bd76b5142ee7d69a6bd72ff5f8a"} Feb 24 02:36:12.413854 master-0 kubenswrapper[31411]: I0224 02:36:12.413820 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hjmv9-config-rhxl6" event={"ID":"23c2f7f9-8374-47bd-b845-2b415f499ba3","Type":"ContainerStarted","Data":"20ce02cf5613e36f1adaebe09eac44e054aa460f35ec585150f0934069692069"} Feb 24 02:36:12.414638 master-0 kubenswrapper[31411]: W0224 02:36:12.414602 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bdc9612_8122_43dd_b0bd_3a2044dc4848.slice/crio-084f2a636ee33fad2b59b7bce1af0e60305e62ac736603a8577259ee3695da82 WatchSource:0}: Error finding container 
084f2a636ee33fad2b59b7bce1af0e60305e62ac736603a8577259ee3695da82: Status 404 returned error can't find the container with id 084f2a636ee33fad2b59b7bce1af0e60305e62ac736603a8577259ee3695da82 Feb 24 02:36:12.425080 master-0 kubenswrapper[31411]: W0224 02:36:12.425020 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bdbc28e_3ed7_4454_8421_043a9d2864c8.slice/crio-9ac9ef783b1a6de761142fb669f8b9a9e50e063b51924d5224aaae99e695f154 WatchSource:0}: Error finding container 9ac9ef783b1a6de761142fb669f8b9a9e50e063b51924d5224aaae99e695f154: Status 404 returned error can't find the container with id 9ac9ef783b1a6de761142fb669f8b9a9e50e063b51924d5224aaae99e695f154 Feb 24 02:36:12.491300 master-0 kubenswrapper[31411]: I0224 02:36:12.491247 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b55dc5f67-k2lcw"] Feb 24 02:36:12.506747 master-0 kubenswrapper[31411]: I0224 02:36:12.506695 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b55dc5f67-k2lcw"] Feb 24 02:36:12.509211 master-0 kubenswrapper[31411]: I0224 02:36:12.509182 31411 scope.go:117] "RemoveContainer" containerID="529e8c557b8008f4c2b0087c1c9534f3ded60d53f6160c927fd52e9aa98352ca" Feb 24 02:36:12.594618 master-0 kubenswrapper[31411]: I0224 02:36:12.594546 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ed6f-account-create-update-kn7d6"] Feb 24 02:36:12.614661 master-0 kubenswrapper[31411]: I0224 02:36:12.614628 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x5qn7"] Feb 24 02:36:13.111363 master-0 kubenswrapper[31411]: I0224 02:36:13.111212 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42530b69-b5be-4699-873f-51cf3161c255" path="/var/lib/kubelet/pods/42530b69-b5be-4699-873f-51cf3161c255/volumes" Feb 24 02:36:13.431771 master-0 kubenswrapper[31411]: I0224 02:36:13.431708 
31411 generic.go:334] "Generic (PLEG): container finished" podID="6ac75776-ea65-4171-9504-a54ece21245f" containerID="83cfaa12b349d53aaf1ea3130516915b5cfe8919f9c34403280f1d2f62e20ad8" exitCode=0 Feb 24 02:36:13.431926 master-0 kubenswrapper[31411]: I0224 02:36:13.431779 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7051-account-create-update-2j7gx" event={"ID":"6ac75776-ea65-4171-9504-a54ece21245f","Type":"ContainerDied","Data":"83cfaa12b349d53aaf1ea3130516915b5cfe8919f9c34403280f1d2f62e20ad8"} Feb 24 02:36:13.431926 master-0 kubenswrapper[31411]: I0224 02:36:13.431861 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7051-account-create-update-2j7gx" event={"ID":"6ac75776-ea65-4171-9504-a54ece21245f","Type":"ContainerStarted","Data":"5494f8597ce1d6f1d7c1c2c6b75b094de5ffb79b89537af4e280a4cae3114ac2"} Feb 24 02:36:13.436445 master-0 kubenswrapper[31411]: I0224 02:36:13.436409 31411 generic.go:334] "Generic (PLEG): container finished" podID="4459f0c8-08d6-4c34-8144-f36dc7408608" containerID="6d58f61c32f49c831e3aebfad2f2f326bbd6c8aa7cd7a0c8677d74e92febc7b3" exitCode=0 Feb 24 02:36:13.436540 master-0 kubenswrapper[31411]: I0224 02:36:13.436484 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x5qn7" event={"ID":"4459f0c8-08d6-4c34-8144-f36dc7408608","Type":"ContainerDied","Data":"6d58f61c32f49c831e3aebfad2f2f326bbd6c8aa7cd7a0c8677d74e92febc7b3"} Feb 24 02:36:13.436540 master-0 kubenswrapper[31411]: I0224 02:36:13.436507 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x5qn7" event={"ID":"4459f0c8-08d6-4c34-8144-f36dc7408608","Type":"ContainerStarted","Data":"aa80193a481b5863dac7d9c3fe7d2bf9f557d551a3b914564008959e4fa27f3d"} Feb 24 02:36:13.444204 master-0 kubenswrapper[31411]: I0224 02:36:13.443448 31411 generic.go:334] "Generic (PLEG): container finished" podID="6bdc9612-8122-43dd-b0bd-3a2044dc4848" 
containerID="2338ebf21823338d8822ab61432c537f4a6ea8a833dcb6a268f84a3109e156a9" exitCode=0 Feb 24 02:36:13.444204 master-0 kubenswrapper[31411]: I0224 02:36:13.443555 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vptkz" event={"ID":"6bdc9612-8122-43dd-b0bd-3a2044dc4848","Type":"ContainerDied","Data":"2338ebf21823338d8822ab61432c537f4a6ea8a833dcb6a268f84a3109e156a9"} Feb 24 02:36:13.444204 master-0 kubenswrapper[31411]: I0224 02:36:13.443612 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vptkz" event={"ID":"6bdc9612-8122-43dd-b0bd-3a2044dc4848","Type":"ContainerStarted","Data":"084f2a636ee33fad2b59b7bce1af0e60305e62ac736603a8577259ee3695da82"} Feb 24 02:36:13.456325 master-0 kubenswrapper[31411]: I0224 02:36:13.455133 31411 generic.go:334] "Generic (PLEG): container finished" podID="5c7949ba-1c60-4a1d-872b-cc388aba1adc" containerID="2d56cef06e1f16125a76fa9bc35f5f7ce7db7f550944fcbb2ee3e2af80254880" exitCode=0 Feb 24 02:36:13.456325 master-0 kubenswrapper[31411]: I0224 02:36:13.455360 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ed6f-account-create-update-kn7d6" event={"ID":"5c7949ba-1c60-4a1d-872b-cc388aba1adc","Type":"ContainerDied","Data":"2d56cef06e1f16125a76fa9bc35f5f7ce7db7f550944fcbb2ee3e2af80254880"} Feb 24 02:36:13.456325 master-0 kubenswrapper[31411]: I0224 02:36:13.455419 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ed6f-account-create-update-kn7d6" event={"ID":"5c7949ba-1c60-4a1d-872b-cc388aba1adc","Type":"ContainerStarted","Data":"f61c163aec07b75cf21441b6e5d93ec2d98e4fcc50badb3850b2eac47543b252"} Feb 24 02:36:13.458284 master-0 kubenswrapper[31411]: I0224 02:36:13.458227 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4z2pz" 
event={"ID":"0bdbc28e-3ed7-4454-8421-043a9d2864c8","Type":"ContainerStarted","Data":"9ac9ef783b1a6de761142fb669f8b9a9e50e063b51924d5224aaae99e695f154"} Feb 24 02:36:13.466303 master-0 kubenswrapper[31411]: I0224 02:36:13.466236 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnhq7" event={"ID":"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af","Type":"ContainerStarted","Data":"c0ed8ed657a2640a5b93cec82ca016e1b8eaeacb56980d510576e81cb6580f18"} Feb 24 02:36:13.520290 master-0 kubenswrapper[31411]: I0224 02:36:13.520171 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dnhq7" podStartSLOduration=3.838323502 podStartE2EDuration="22.520146155s" podCreationTimestamp="2026-02-24 02:35:51 +0000 UTC" firstStartedPulling="2026-02-24 02:35:53.120402624 +0000 UTC m=+896.337600470" lastFinishedPulling="2026-02-24 02:36:11.802225277 +0000 UTC m=+915.019423123" observedRunningTime="2026-02-24 02:36:13.515004471 +0000 UTC m=+916.732202317" watchObservedRunningTime="2026-02-24 02:36:13.520146155 +0000 UTC m=+916.737344001" Feb 24 02:36:13.971148 master-0 kubenswrapper[31411]: I0224 02:36:13.970583 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:14.139536 master-0 kubenswrapper[31411]: I0224 02:36:14.139446 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-additional-scripts\") pod \"23c2f7f9-8374-47bd-b845-2b415f499ba3\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " Feb 24 02:36:14.139897 master-0 kubenswrapper[31411]: I0224 02:36:14.139703 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run-ovn\") pod \"23c2f7f9-8374-47bd-b845-2b415f499ba3\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " Feb 24 02:36:14.139897 master-0 kubenswrapper[31411]: I0224 02:36:14.139734 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run\") pod \"23c2f7f9-8374-47bd-b845-2b415f499ba3\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " Feb 24 02:36:14.139897 master-0 kubenswrapper[31411]: I0224 02:36:14.139776 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-log-ovn\") pod \"23c2f7f9-8374-47bd-b845-2b415f499ba3\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " Feb 24 02:36:14.139897 master-0 kubenswrapper[31411]: I0224 02:36:14.139897 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw296\" (UniqueName: \"kubernetes.io/projected/23c2f7f9-8374-47bd-b845-2b415f499ba3-kube-api-access-pw296\") pod \"23c2f7f9-8374-47bd-b845-2b415f499ba3\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " Feb 24 02:36:14.140093 master-0 kubenswrapper[31411]: I0224 02:36:14.139959 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-scripts\") pod \"23c2f7f9-8374-47bd-b845-2b415f499ba3\" (UID: \"23c2f7f9-8374-47bd-b845-2b415f499ba3\") " Feb 24 02:36:14.140298 master-0 kubenswrapper[31411]: I0224 02:36:14.140244 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run" (OuterVolumeSpecName: "var-run") pod "23c2f7f9-8374-47bd-b845-2b415f499ba3" (UID: "23c2f7f9-8374-47bd-b845-2b415f499ba3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:36:14.140392 master-0 kubenswrapper[31411]: I0224 02:36:14.140291 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "23c2f7f9-8374-47bd-b845-2b415f499ba3" (UID: "23c2f7f9-8374-47bd-b845-2b415f499ba3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:36:14.140521 master-0 kubenswrapper[31411]: I0224 02:36:14.140438 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "23c2f7f9-8374-47bd-b845-2b415f499ba3" (UID: "23c2f7f9-8374-47bd-b845-2b415f499ba3"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:36:14.141188 master-0 kubenswrapper[31411]: I0224 02:36:14.141162 31411 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:14.141329 master-0 kubenswrapper[31411]: I0224 02:36:14.141313 31411 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-run\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:14.141442 master-0 kubenswrapper[31411]: I0224 02:36:14.141426 31411 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23c2f7f9-8374-47bd-b845-2b415f499ba3-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:14.141539 master-0 kubenswrapper[31411]: I0224 02:36:14.141325 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "23c2f7f9-8374-47bd-b845-2b415f499ba3" (UID: "23c2f7f9-8374-47bd-b845-2b415f499ba3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:14.141920 master-0 kubenswrapper[31411]: I0224 02:36:14.141874 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-scripts" (OuterVolumeSpecName: "scripts") pod "23c2f7f9-8374-47bd-b845-2b415f499ba3" (UID: "23c2f7f9-8374-47bd-b845-2b415f499ba3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:14.147816 master-0 kubenswrapper[31411]: I0224 02:36:14.147762 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c2f7f9-8374-47bd-b845-2b415f499ba3-kube-api-access-pw296" (OuterVolumeSpecName: "kube-api-access-pw296") pod "23c2f7f9-8374-47bd-b845-2b415f499ba3" (UID: "23c2f7f9-8374-47bd-b845-2b415f499ba3"). InnerVolumeSpecName "kube-api-access-pw296". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:14.253604 master-0 kubenswrapper[31411]: I0224 02:36:14.251143 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw296\" (UniqueName: \"kubernetes.io/projected/23c2f7f9-8374-47bd-b845-2b415f499ba3-kube-api-access-pw296\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:14.253604 master-0 kubenswrapper[31411]: I0224 02:36:14.251189 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:14.253604 master-0 kubenswrapper[31411]: I0224 02:36:14.251200 31411 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f7f9-8374-47bd-b845-2b415f499ba3-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:14.486604 master-0 kubenswrapper[31411]: I0224 02:36:14.486514 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hjmv9-config-rhxl6" Feb 24 02:36:14.486956 master-0 kubenswrapper[31411]: I0224 02:36:14.486532 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hjmv9-config-rhxl6" event={"ID":"23c2f7f9-8374-47bd-b845-2b415f499ba3","Type":"ContainerDied","Data":"20ce02cf5613e36f1adaebe09eac44e054aa460f35ec585150f0934069692069"} Feb 24 02:36:14.486956 master-0 kubenswrapper[31411]: I0224 02:36:14.486673 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ce02cf5613e36f1adaebe09eac44e054aa460f35ec585150f0934069692069" Feb 24 02:36:15.552717 master-0 kubenswrapper[31411]: I0224 02:36:15.546643 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hjmv9-config-rhxl6"] Feb 24 02:36:15.560535 master-0 kubenswrapper[31411]: I0224 02:36:15.560446 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hjmv9-config-rhxl6"] Feb 24 02:36:17.116009 master-0 kubenswrapper[31411]: I0224 02:36:17.115894 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c2f7f9-8374-47bd-b845-2b415f499ba3" path="/var/lib/kubelet/pods/23c2f7f9-8374-47bd-b845-2b415f499ba3/volumes" Feb 24 02:36:18.236224 master-0 kubenswrapper[31411]: I0224 02:36:18.236145 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7051-account-create-update-2j7gx" Feb 24 02:36:18.242697 master-0 kubenswrapper[31411]: I0224 02:36:18.242635 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:18.251429 master-0 kubenswrapper[31411]: I0224 02:36:18.251364 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:18.260296 master-0 kubenswrapper[31411]: I0224 02:36:18.260224 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:18.286190 master-0 kubenswrapper[31411]: I0224 02:36:18.286112 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r46wh\" (UniqueName: \"kubernetes.io/projected/4459f0c8-08d6-4c34-8144-f36dc7408608-kube-api-access-r46wh\") pod \"4459f0c8-08d6-4c34-8144-f36dc7408608\" (UID: \"4459f0c8-08d6-4c34-8144-f36dc7408608\") " Feb 24 02:36:18.286464 master-0 kubenswrapper[31411]: I0224 02:36:18.286397 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc9612-8122-43dd-b0bd-3a2044dc4848-operator-scripts\") pod \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\" (UID: \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\") " Feb 24 02:36:18.286554 master-0 kubenswrapper[31411]: I0224 02:36:18.286527 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzv7x\" (UniqueName: \"kubernetes.io/projected/6bdc9612-8122-43dd-b0bd-3a2044dc4848-kube-api-access-dzv7x\") pod \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\" (UID: \"6bdc9612-8122-43dd-b0bd-3a2044dc4848\") " Feb 24 02:36:18.287016 master-0 kubenswrapper[31411]: I0224 02:36:18.286953 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bdc9612-8122-43dd-b0bd-3a2044dc4848-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bdc9612-8122-43dd-b0bd-3a2044dc4848" (UID: "6bdc9612-8122-43dd-b0bd-3a2044dc4848"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:18.287415 master-0 kubenswrapper[31411]: I0224 02:36:18.287366 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c7949ba-1c60-4a1d-872b-cc388aba1adc-operator-scripts\") pod \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\" (UID: \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\") " Feb 24 02:36:18.287503 master-0 kubenswrapper[31411]: I0224 02:36:18.287474 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gs9d\" (UniqueName: \"kubernetes.io/projected/6ac75776-ea65-4171-9504-a54ece21245f-kube-api-access-7gs9d\") pod \"6ac75776-ea65-4171-9504-a54ece21245f\" (UID: \"6ac75776-ea65-4171-9504-a54ece21245f\") " Feb 24 02:36:18.287670 master-0 kubenswrapper[31411]: I0224 02:36:18.287628 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p54xd\" (UniqueName: \"kubernetes.io/projected/5c7949ba-1c60-4a1d-872b-cc388aba1adc-kube-api-access-p54xd\") pod \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\" (UID: \"5c7949ba-1c60-4a1d-872b-cc388aba1adc\") " Feb 24 02:36:18.287769 master-0 kubenswrapper[31411]: I0224 02:36:18.287741 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ac75776-ea65-4171-9504-a54ece21245f-operator-scripts\") pod \"6ac75776-ea65-4171-9504-a54ece21245f\" (UID: \"6ac75776-ea65-4171-9504-a54ece21245f\") " Feb 24 02:36:18.288098 master-0 kubenswrapper[31411]: I0224 02:36:18.288053 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4459f0c8-08d6-4c34-8144-f36dc7408608-operator-scripts\") pod \"4459f0c8-08d6-4c34-8144-f36dc7408608\" (UID: \"4459f0c8-08d6-4c34-8144-f36dc7408608\") " Feb 24 02:36:18.288217 master-0 kubenswrapper[31411]: I0224 
02:36:18.288168 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c7949ba-1c60-4a1d-872b-cc388aba1adc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c7949ba-1c60-4a1d-872b-cc388aba1adc" (UID: "5c7949ba-1c60-4a1d-872b-cc388aba1adc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:18.288970 master-0 kubenswrapper[31411]: I0224 02:36:18.288887 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac75776-ea65-4171-9504-a54ece21245f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ac75776-ea65-4171-9504-a54ece21245f" (UID: "6ac75776-ea65-4171-9504-a54ece21245f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:18.288970 master-0 kubenswrapper[31411]: I0224 02:36:18.288917 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdc9612-8122-43dd-b0bd-3a2044dc4848-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:18.289170 master-0 kubenswrapper[31411]: I0224 02:36:18.289005 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c7949ba-1c60-4a1d-872b-cc388aba1adc-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:18.289534 master-0 kubenswrapper[31411]: I0224 02:36:18.289499 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4459f0c8-08d6-4c34-8144-f36dc7408608-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4459f0c8-08d6-4c34-8144-f36dc7408608" (UID: "4459f0c8-08d6-4c34-8144-f36dc7408608"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:18.293246 master-0 kubenswrapper[31411]: I0224 02:36:18.293188 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bdc9612-8122-43dd-b0bd-3a2044dc4848-kube-api-access-dzv7x" (OuterVolumeSpecName: "kube-api-access-dzv7x") pod "6bdc9612-8122-43dd-b0bd-3a2044dc4848" (UID: "6bdc9612-8122-43dd-b0bd-3a2044dc4848"). InnerVolumeSpecName "kube-api-access-dzv7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:18.294320 master-0 kubenswrapper[31411]: I0224 02:36:18.294275 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac75776-ea65-4171-9504-a54ece21245f-kube-api-access-7gs9d" (OuterVolumeSpecName: "kube-api-access-7gs9d") pod "6ac75776-ea65-4171-9504-a54ece21245f" (UID: "6ac75776-ea65-4171-9504-a54ece21245f"). InnerVolumeSpecName "kube-api-access-7gs9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:18.295506 master-0 kubenswrapper[31411]: I0224 02:36:18.295450 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7949ba-1c60-4a1d-872b-cc388aba1adc-kube-api-access-p54xd" (OuterVolumeSpecName: "kube-api-access-p54xd") pod "5c7949ba-1c60-4a1d-872b-cc388aba1adc" (UID: "5c7949ba-1c60-4a1d-872b-cc388aba1adc"). InnerVolumeSpecName "kube-api-access-p54xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:18.295506 master-0 kubenswrapper[31411]: I0224 02:36:18.295493 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4459f0c8-08d6-4c34-8144-f36dc7408608-kube-api-access-r46wh" (OuterVolumeSpecName: "kube-api-access-r46wh") pod "4459f0c8-08d6-4c34-8144-f36dc7408608" (UID: "4459f0c8-08d6-4c34-8144-f36dc7408608"). InnerVolumeSpecName "kube-api-access-r46wh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:18.391844 master-0 kubenswrapper[31411]: I0224 02:36:18.391774 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4459f0c8-08d6-4c34-8144-f36dc7408608-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:18.391844 master-0 kubenswrapper[31411]: I0224 02:36:18.391830 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r46wh\" (UniqueName: \"kubernetes.io/projected/4459f0c8-08d6-4c34-8144-f36dc7408608-kube-api-access-r46wh\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:18.391844 master-0 kubenswrapper[31411]: I0224 02:36:18.391849 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzv7x\" (UniqueName: \"kubernetes.io/projected/6bdc9612-8122-43dd-b0bd-3a2044dc4848-kube-api-access-dzv7x\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:18.392218 master-0 kubenswrapper[31411]: I0224 02:36:18.391864 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gs9d\" (UniqueName: \"kubernetes.io/projected/6ac75776-ea65-4171-9504-a54ece21245f-kube-api-access-7gs9d\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:18.392218 master-0 kubenswrapper[31411]: I0224 02:36:18.391878 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p54xd\" (UniqueName: \"kubernetes.io/projected/5c7949ba-1c60-4a1d-872b-cc388aba1adc-kube-api-access-p54xd\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:18.392218 master-0 kubenswrapper[31411]: I0224 02:36:18.391893 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ac75776-ea65-4171-9504-a54ece21245f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:18.571315 master-0 kubenswrapper[31411]: I0224 02:36:18.571118 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-x5qn7" event={"ID":"4459f0c8-08d6-4c34-8144-f36dc7408608","Type":"ContainerDied","Data":"aa80193a481b5863dac7d9c3fe7d2bf9f557d551a3b914564008959e4fa27f3d"} Feb 24 02:36:18.571315 master-0 kubenswrapper[31411]: I0224 02:36:18.571179 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa80193a481b5863dac7d9c3fe7d2bf9f557d551a3b914564008959e4fa27f3d" Feb 24 02:36:18.571315 master-0 kubenswrapper[31411]: I0224 02:36:18.571175 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x5qn7" Feb 24 02:36:18.574935 master-0 kubenswrapper[31411]: I0224 02:36:18.574872 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vptkz" event={"ID":"6bdc9612-8122-43dd-b0bd-3a2044dc4848","Type":"ContainerDied","Data":"084f2a636ee33fad2b59b7bce1af0e60305e62ac736603a8577259ee3695da82"} Feb 24 02:36:18.574935 master-0 kubenswrapper[31411]: I0224 02:36:18.574898 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="084f2a636ee33fad2b59b7bce1af0e60305e62ac736603a8577259ee3695da82" Feb 24 02:36:18.575130 master-0 kubenswrapper[31411]: I0224 02:36:18.575014 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vptkz" Feb 24 02:36:18.588697 master-0 kubenswrapper[31411]: I0224 02:36:18.588560 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ed6f-account-create-update-kn7d6" event={"ID":"5c7949ba-1c60-4a1d-872b-cc388aba1adc","Type":"ContainerDied","Data":"f61c163aec07b75cf21441b6e5d93ec2d98e4fcc50badb3850b2eac47543b252"} Feb 24 02:36:18.588697 master-0 kubenswrapper[31411]: I0224 02:36:18.588649 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f61c163aec07b75cf21441b6e5d93ec2d98e4fcc50badb3850b2eac47543b252" Feb 24 02:36:18.589005 master-0 kubenswrapper[31411]: I0224 02:36:18.588952 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ed6f-account-create-update-kn7d6" Feb 24 02:36:18.590816 master-0 kubenswrapper[31411]: I0224 02:36:18.590737 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4z2pz" event={"ID":"0bdbc28e-3ed7-4454-8421-043a9d2864c8","Type":"ContainerStarted","Data":"7ff963faa5b47cf603ab6ed354024a4daa3d9b51272e1250252e26307a7cd261"} Feb 24 02:36:18.593564 master-0 kubenswrapper[31411]: I0224 02:36:18.593492 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7051-account-create-update-2j7gx" event={"ID":"6ac75776-ea65-4171-9504-a54ece21245f","Type":"ContainerDied","Data":"5494f8597ce1d6f1d7c1c2c6b75b094de5ffb79b89537af4e280a4cae3114ac2"} Feb 24 02:36:18.593700 master-0 kubenswrapper[31411]: I0224 02:36:18.593561 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5494f8597ce1d6f1d7c1c2c6b75b094de5ffb79b89537af4e280a4cae3114ac2" Feb 24 02:36:18.593700 master-0 kubenswrapper[31411]: I0224 02:36:18.593680 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7051-account-create-update-2j7gx"
Feb 24 02:36:18.627500 master-0 kubenswrapper[31411]: I0224 02:36:18.627291 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4z2pz" podStartSLOduration=11.004417448 podStartE2EDuration="16.627271472s" podCreationTimestamp="2026-02-24 02:36:02 +0000 UTC" firstStartedPulling="2026-02-24 02:36:12.460037107 +0000 UTC m=+915.677234963" lastFinishedPulling="2026-02-24 02:36:18.082891111 +0000 UTC m=+921.300088987" observedRunningTime="2026-02-24 02:36:18.617283282 +0000 UTC m=+921.834481128" watchObservedRunningTime="2026-02-24 02:36:18.627271472 +0000 UTC m=+921.844469318"
Feb 24 02:36:22.664711 master-0 kubenswrapper[31411]: I0224 02:36:22.664602 31411 generic.go:334] "Generic (PLEG): container finished" podID="0bdbc28e-3ed7-4454-8421-043a9d2864c8" containerID="7ff963faa5b47cf603ab6ed354024a4daa3d9b51272e1250252e26307a7cd261" exitCode=0
Feb 24 02:36:22.665327 master-0 kubenswrapper[31411]: I0224 02:36:22.664717 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4z2pz" event={"ID":"0bdbc28e-3ed7-4454-8421-043a9d2864c8","Type":"ContainerDied","Data":"7ff963faa5b47cf603ab6ed354024a4daa3d9b51272e1250252e26307a7cd261"}
Feb 24 02:36:24.289729 master-0 kubenswrapper[31411]: I0224 02:36:24.289666 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4z2pz"
Feb 24 02:36:24.483861 master-0 kubenswrapper[31411]: I0224 02:36:24.483810 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-config-data\") pod \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") "
Feb 24 02:36:24.484193 master-0 kubenswrapper[31411]: I0224 02:36:24.483921 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lv8w\" (UniqueName: \"kubernetes.io/projected/0bdbc28e-3ed7-4454-8421-043a9d2864c8-kube-api-access-7lv8w\") pod \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") "
Feb 24 02:36:24.484193 master-0 kubenswrapper[31411]: I0224 02:36:24.484106 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-combined-ca-bundle\") pod \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\" (UID: \"0bdbc28e-3ed7-4454-8421-043a9d2864c8\") "
Feb 24 02:36:24.488199 master-0 kubenswrapper[31411]: I0224 02:36:24.488126 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdbc28e-3ed7-4454-8421-043a9d2864c8-kube-api-access-7lv8w" (OuterVolumeSpecName: "kube-api-access-7lv8w") pod "0bdbc28e-3ed7-4454-8421-043a9d2864c8" (UID: "0bdbc28e-3ed7-4454-8421-043a9d2864c8"). InnerVolumeSpecName "kube-api-access-7lv8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:36:24.538252 master-0 kubenswrapper[31411]: I0224 02:36:24.538203 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bdbc28e-3ed7-4454-8421-043a9d2864c8" (UID: "0bdbc28e-3ed7-4454-8421-043a9d2864c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:36:24.546497 master-0 kubenswrapper[31411]: I0224 02:36:24.546423 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-config-data" (OuterVolumeSpecName: "config-data") pod "0bdbc28e-3ed7-4454-8421-043a9d2864c8" (UID: "0bdbc28e-3ed7-4454-8421-043a9d2864c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:36:24.586565 master-0 kubenswrapper[31411]: I0224 02:36:24.586517 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:24.586565 master-0 kubenswrapper[31411]: I0224 02:36:24.586562 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lv8w\" (UniqueName: \"kubernetes.io/projected/0bdbc28e-3ed7-4454-8421-043a9d2864c8-kube-api-access-7lv8w\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:24.586746 master-0 kubenswrapper[31411]: I0224 02:36:24.586592 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdbc28e-3ed7-4454-8421-043a9d2864c8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:24.698203 master-0 kubenswrapper[31411]: I0224 02:36:24.698119 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4z2pz" event={"ID":"0bdbc28e-3ed7-4454-8421-043a9d2864c8","Type":"ContainerDied","Data":"9ac9ef783b1a6de761142fb669f8b9a9e50e063b51924d5224aaae99e695f154"}
Feb 24 02:36:24.698203 master-0 kubenswrapper[31411]: I0224 02:36:24.698192 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4z2pz"
Feb 24 02:36:24.698598 master-0 kubenswrapper[31411]: I0224 02:36:24.698202 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac9ef783b1a6de761142fb669f8b9a9e50e063b51924d5224aaae99e695f154"
Feb 24 02:36:24.701024 master-0 kubenswrapper[31411]: I0224 02:36:24.700971 31411 generic.go:334] "Generic (PLEG): container finished" podID="1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af" containerID="c0ed8ed657a2640a5b93cec82ca016e1b8eaeacb56980d510576e81cb6580f18" exitCode=0
Feb 24 02:36:24.701096 master-0 kubenswrapper[31411]: I0224 02:36:24.701036 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnhq7" event={"ID":"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af","Type":"ContainerDied","Data":"c0ed8ed657a2640a5b93cec82ca016e1b8eaeacb56980d510576e81cb6580f18"}
Feb 24 02:36:25.184104 master-0 kubenswrapper[31411]: I0224 02:36:25.184043 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bw6l8"]
Feb 24 02:36:25.186701 master-0 kubenswrapper[31411]: E0224 02:36:25.186677 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7949ba-1c60-4a1d-872b-cc388aba1adc" containerName="mariadb-account-create-update"
Feb 24 02:36:25.186701 master-0 kubenswrapper[31411]: I0224 02:36:25.186703 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7949ba-1c60-4a1d-872b-cc388aba1adc" containerName="mariadb-account-create-update"
Feb 24 02:36:25.186801 master-0 kubenswrapper[31411]: E0224 02:36:25.186729 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c2f7f9-8374-47bd-b845-2b415f499ba3" containerName="ovn-config"
Feb 24 02:36:25.186801 master-0 kubenswrapper[31411]: I0224 02:36:25.186738 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c2f7f9-8374-47bd-b845-2b415f499ba3" containerName="ovn-config"
Feb 24 02:36:25.186801 master-0 kubenswrapper[31411]: E0224 02:36:25.186768 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42530b69-b5be-4699-873f-51cf3161c255" containerName="dnsmasq-dns"
Feb 24 02:36:25.186801 master-0 kubenswrapper[31411]: I0224 02:36:25.186777 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="42530b69-b5be-4699-873f-51cf3161c255" containerName="dnsmasq-dns"
Feb 24 02:36:25.186801 master-0 kubenswrapper[31411]: E0224 02:36:25.186792 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bdc9612-8122-43dd-b0bd-3a2044dc4848" containerName="mariadb-database-create"
Feb 24 02:36:25.186801 master-0 kubenswrapper[31411]: I0224 02:36:25.186799 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdc9612-8122-43dd-b0bd-3a2044dc4848" containerName="mariadb-database-create"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: E0224 02:36:25.186811 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42530b69-b5be-4699-873f-51cf3161c255" containerName="init"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: I0224 02:36:25.186822 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="42530b69-b5be-4699-873f-51cf3161c255" containerName="init"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: E0224 02:36:25.186837 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac75776-ea65-4171-9504-a54ece21245f" containerName="mariadb-account-create-update"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: I0224 02:36:25.186844 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac75776-ea65-4171-9504-a54ece21245f" containerName="mariadb-account-create-update"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: E0224 02:36:25.186870 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdbc28e-3ed7-4454-8421-043a9d2864c8" containerName="keystone-db-sync"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: I0224 02:36:25.186877 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdbc28e-3ed7-4454-8421-043a9d2864c8" containerName="keystone-db-sync"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: E0224 02:36:25.186895 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dacf13b3-b36b-40db-8824-550ffb0b6cbd" containerName="mariadb-account-create-update"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: I0224 02:36:25.186905 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="dacf13b3-b36b-40db-8824-550ffb0b6cbd" containerName="mariadb-account-create-update"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: E0224 02:36:25.186916 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4459f0c8-08d6-4c34-8144-f36dc7408608" containerName="mariadb-database-create"
Feb 24 02:36:25.186983 master-0 kubenswrapper[31411]: I0224 02:36:25.186922 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="4459f0c8-08d6-4c34-8144-f36dc7408608" containerName="mariadb-database-create"
Feb 24 02:36:25.187279 master-0 kubenswrapper[31411]: I0224 02:36:25.187138 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="dacf13b3-b36b-40db-8824-550ffb0b6cbd" containerName="mariadb-account-create-update"
Feb 24 02:36:25.187279 master-0 kubenswrapper[31411]: I0224 02:36:25.187163 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="42530b69-b5be-4699-873f-51cf3161c255" containerName="dnsmasq-dns"
Feb 24 02:36:25.187279 master-0 kubenswrapper[31411]: I0224 02:36:25.187182 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bdc9612-8122-43dd-b0bd-3a2044dc4848" containerName="mariadb-database-create"
Feb 24 02:36:25.187279 master-0 kubenswrapper[31411]: I0224 02:36:25.187202 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7949ba-1c60-4a1d-872b-cc388aba1adc" containerName="mariadb-account-create-update"
Feb 24 02:36:25.187279 master-0 kubenswrapper[31411]: I0224 02:36:25.187214 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdbc28e-3ed7-4454-8421-043a9d2864c8" containerName="keystone-db-sync"
Feb 24 02:36:25.187279 master-0 kubenswrapper[31411]: I0224 02:36:25.187226 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac75776-ea65-4171-9504-a54ece21245f" containerName="mariadb-account-create-update"
Feb 24 02:36:25.187279 master-0 kubenswrapper[31411]: I0224 02:36:25.187232 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="4459f0c8-08d6-4c34-8144-f36dc7408608" containerName="mariadb-database-create"
Feb 24 02:36:25.187279 master-0 kubenswrapper[31411]: I0224 02:36:25.187248 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c2f7f9-8374-47bd-b845-2b415f499ba3" containerName="ovn-config"
Feb 24 02:36:25.202163 master-0 kubenswrapper[31411]: I0224 02:36:25.201277 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.215215 master-0 kubenswrapper[31411]: I0224 02:36:25.215160 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 24 02:36:25.215695 master-0 kubenswrapper[31411]: I0224 02:36:25.215374 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 24 02:36:25.215866 master-0 kubenswrapper[31411]: I0224 02:36:25.215424 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 24 02:36:25.216854 master-0 kubenswrapper[31411]: I0224 02:36:25.216839 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 24 02:36:25.220225 master-0 kubenswrapper[31411]: I0224 02:36:25.220204 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-597f6b8457-gn4tl"]
Feb 24 02:36:25.223276 master-0 kubenswrapper[31411]: I0224 02:36:25.223257 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.265055 master-0 kubenswrapper[31411]: I0224 02:36:25.264978 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bw6l8"]
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278603 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-svc\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278685 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk6vs\" (UniqueName: \"kubernetes.io/projected/a8678c29-b004-43c1-8393-c7a285064416-kube-api-access-wk6vs\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278711 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-combined-ca-bundle\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278745 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-config-data\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278770 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-scripts\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278820 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-nb\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278851 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-sb\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278889 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-fernet-keys\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278917 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-credential-keys\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278944 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-swift-storage-0\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278968 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpb8d\" (UniqueName: \"kubernetes.io/projected/99772cb2-a116-47a4-a08b-e81a079e56f4-kube-api-access-tpb8d\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.282688 master-0 kubenswrapper[31411]: I0224 02:36:25.278989 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-config\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.287554 master-0 kubenswrapper[31411]: I0224 02:36:25.287516 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-597f6b8457-gn4tl"]
Feb 24 02:36:25.339119 master-0 kubenswrapper[31411]: I0224 02:36:25.339049 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-8l585"]
Feb 24 02:36:25.341836 master-0 kubenswrapper[31411]: I0224 02:36:25.341814 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-8l585"
Feb 24 02:36:25.364722 master-0 kubenswrapper[31411]: I0224 02:36:25.364654 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-8l585"]
Feb 24 02:36:25.388506 master-0 kubenswrapper[31411]: I0224 02:36:25.388433 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-config-data\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.388506 master-0 kubenswrapper[31411]: I0224 02:36:25.388507 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-scripts\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.389050 master-0 kubenswrapper[31411]: I0224 02:36:25.388972 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-nb\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.390522 master-0 kubenswrapper[31411]: I0224 02:36:25.389549 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-sb\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.390522 master-0 kubenswrapper[31411]: I0224 02:36:25.389687 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-fernet-keys\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.390522 master-0 kubenswrapper[31411]: I0224 02:36:25.389722 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-credential-keys\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.390522 master-0 kubenswrapper[31411]: I0224 02:36:25.390441 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-nb\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.390707 master-0 kubenswrapper[31411]: I0224 02:36:25.390628 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-sb\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.393771 master-0 kubenswrapper[31411]: I0224 02:36:25.391414 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-swift-storage-0\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.393771 master-0 kubenswrapper[31411]: I0224 02:36:25.391474 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpb8d\" (UniqueName: \"kubernetes.io/projected/99772cb2-a116-47a4-a08b-e81a079e56f4-kube-api-access-tpb8d\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.393771 master-0 kubenswrapper[31411]: I0224 02:36:25.392138 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-config\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.393771 master-0 kubenswrapper[31411]: I0224 02:36:25.392423 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-swift-storage-0\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.393771 master-0 kubenswrapper[31411]: I0224 02:36:25.393009 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-svc\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.393771 master-0 kubenswrapper[31411]: I0224 02:36:25.393064 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk6vs\" (UniqueName: \"kubernetes.io/projected/a8678c29-b004-43c1-8393-c7a285064416-kube-api-access-wk6vs\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.393771 master-0 kubenswrapper[31411]: I0224 02:36:25.393091 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-combined-ca-bundle\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.394751 master-0 kubenswrapper[31411]: I0224 02:36:25.394702 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-svc\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.396202 master-0 kubenswrapper[31411]: I0224 02:36:25.396160 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-scripts\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.396269 master-0 kubenswrapper[31411]: I0224 02:36:25.396184 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-fernet-keys\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.396638 master-0 kubenswrapper[31411]: I0224 02:36:25.396596 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-config\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.416018 master-0 kubenswrapper[31411]: I0224 02:36:25.410852 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-k6pnr"]
Feb 24 02:36:25.416018 master-0 kubenswrapper[31411]: I0224 02:36:25.411012 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-config-data\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.416018 master-0 kubenswrapper[31411]: I0224 02:36:25.412179 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-credential-keys\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.416018 master-0 kubenswrapper[31411]: I0224 02:36:25.413105 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-k6pnr"
Feb 24 02:36:25.440812 master-0 kubenswrapper[31411]: I0224 02:36:25.420323 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 24 02:36:25.440812 master-0 kubenswrapper[31411]: I0224 02:36:25.421084 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-ecce-account-create-update-2pvjj"]
Feb 24 02:36:25.440812 master-0 kubenswrapper[31411]: I0224 02:36:25.421556 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 24 02:36:25.440812 master-0 kubenswrapper[31411]: I0224 02:36:25.422984 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-ecce-account-create-update-2pvjj"
Feb 24 02:36:25.440812 master-0 kubenswrapper[31411]: I0224 02:36:25.423887 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpb8d\" (UniqueName: \"kubernetes.io/projected/99772cb2-a116-47a4-a08b-e81a079e56f4-kube-api-access-tpb8d\") pod \"dnsmasq-dns-597f6b8457-gn4tl\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.440812 master-0 kubenswrapper[31411]: I0224 02:36:25.424867 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk6vs\" (UniqueName: \"kubernetes.io/projected/a8678c29-b004-43c1-8393-c7a285064416-kube-api-access-wk6vs\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.440812 master-0 kubenswrapper[31411]: I0224 02:36:25.425128 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-combined-ca-bundle\") pod \"keystone-bootstrap-bw6l8\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") " pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.440812 master-0 kubenswrapper[31411]: I0224 02:36:25.429267 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret"
Feb 24 02:36:25.444381 master-0 kubenswrapper[31411]: I0224 02:36:25.442932 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ac23-db-sync-mhchn"]
Feb 24 02:36:25.454674 master-0 kubenswrapper[31411]: I0224 02:36:25.445279 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.454674 master-0 kubenswrapper[31411]: I0224 02:36:25.454248 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-config-data"
Feb 24 02:36:25.455345 master-0 kubenswrapper[31411]: I0224 02:36:25.455271 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-k6pnr"]
Feb 24 02:36:25.459586 master-0 kubenswrapper[31411]: I0224 02:36:25.455689 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-scripts"
Feb 24 02:36:25.481016 master-0 kubenswrapper[31411]: I0224 02:36:25.480948 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-ecce-account-create-update-2pvjj"]
Feb 24 02:36:25.502646 master-0 kubenswrapper[31411]: I0224 02:36:25.502562 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-db-sync-mhchn"]
Feb 24 02:36:25.530097 master-0 kubenswrapper[31411]: I0224 02:36:25.525303 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-db-sync-config-data\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.530097 master-0 kubenswrapper[31411]: I0224 02:36:25.525416 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d47rr\" (UniqueName: \"kubernetes.io/projected/2752407f-852c-433f-80e1-0a3d258c7edf-kube-api-access-d47rr\") pod \"ironic-db-create-8l585\" (UID: \"2752407f-852c-433f-80e1-0a3d258c7edf\") " pod="openstack/ironic-db-create-8l585"
Feb 24 02:36:25.530097 master-0 kubenswrapper[31411]: I0224 02:36:25.525644 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2752407f-852c-433f-80e1-0a3d258c7edf-operator-scripts\") pod \"ironic-db-create-8l585\" (UID: \"2752407f-852c-433f-80e1-0a3d258c7edf\") " pod="openstack/ironic-db-create-8l585"
Feb 24 02:36:25.530097 master-0 kubenswrapper[31411]: I0224 02:36:25.525812 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-scripts\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.530097 master-0 kubenswrapper[31411]: I0224 02:36:25.528670 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c356cf44-9774-4260-9463-2960be302f0e-etc-machine-id\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.530097 master-0 kubenswrapper[31411]: I0224 02:36:25.528779 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-combined-ca-bundle\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.530097 master-0 kubenswrapper[31411]: I0224 02:36:25.529089 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4pl\" (UniqueName: \"kubernetes.io/projected/c356cf44-9774-4260-9463-2960be302f0e-kube-api-access-9w4pl\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.530097 master-0 kubenswrapper[31411]: I0224 02:36:25.529159 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-config-data\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.605688 master-0 kubenswrapper[31411]: I0224 02:36:25.605061 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-597f6b8457-gn4tl"]
Feb 24 02:36:25.618598 master-0 kubenswrapper[31411]: I0224 02:36:25.614509 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597f6b8457-gn4tl"
Feb 24 02:36:25.627691 master-0 kubenswrapper[31411]: I0224 02:36:25.625013 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:25.633593 master-0 kubenswrapper[31411]: I0224 02:36:25.631865 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2752407f-852c-433f-80e1-0a3d258c7edf-operator-scripts\") pod \"ironic-db-create-8l585\" (UID: \"2752407f-852c-433f-80e1-0a3d258c7edf\") " pod="openstack/ironic-db-create-8l585"
Feb 24 02:36:25.633763 master-0 kubenswrapper[31411]: I0224 02:36:25.633611 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2752407f-852c-433f-80e1-0a3d258c7edf-operator-scripts\") pod \"ironic-db-create-8l585\" (UID: \"2752407f-852c-433f-80e1-0a3d258c7edf\") " pod="openstack/ironic-db-create-8l585"
Feb 24 02:36:25.633812 master-0 kubenswrapper[31411]: I0224 02:36:25.633743 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-config\") pod \"neutron-db-sync-k6pnr\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " pod="openstack/neutron-db-sync-k6pnr"
Feb 24 02:36:25.634309 master-0 kubenswrapper[31411]: I0224 02:36:25.633859 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-scripts\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.634731 master-0 kubenswrapper[31411]: I0224 02:36:25.634711 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c356cf44-9774-4260-9463-2960be302f0e-etc-machine-id\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.634842 master-0 kubenswrapper[31411]: I0224 02:36:25.634800 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c356cf44-9774-4260-9463-2960be302f0e-etc-machine-id\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.634917 master-0 kubenswrapper[31411]: I0224 02:36:25.634902 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-combined-ca-bundle\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:36:25.635013 master-0 kubenswrapper[31411]: I0224 02:36:25.634996 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4631c5-4285-4b86-8afb-2462577e53fc-operator-scripts\") pod \"ironic-ecce-account-create-update-2pvjj\"
(UID: \"9f4631c5-4285-4b86-8afb-2462577e53fc\") " pod="openstack/ironic-ecce-account-create-update-2pvjj" Feb 24 02:36:25.635247 master-0 kubenswrapper[31411]: I0224 02:36:25.635233 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4pl\" (UniqueName: \"kubernetes.io/projected/c356cf44-9774-4260-9463-2960be302f0e-kube-api-access-9w4pl\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:36:25.635344 master-0 kubenswrapper[31411]: I0224 02:36:25.635331 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-combined-ca-bundle\") pod \"neutron-db-sync-k6pnr\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:25.635437 master-0 kubenswrapper[31411]: I0224 02:36:25.635425 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-config-data\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:36:25.635567 master-0 kubenswrapper[31411]: I0224 02:36:25.635554 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6288\" (UniqueName: \"kubernetes.io/projected/8d7c1b26-0a14-4626-b7ed-ec82103e883c-kube-api-access-m6288\") pod \"neutron-db-sync-k6pnr\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:25.636343 master-0 kubenswrapper[31411]: I0224 02:36:25.636326 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-db-sync-config-data\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:36:25.636474 master-0 kubenswrapper[31411]: I0224 02:36:25.636458 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d47rr\" (UniqueName: \"kubernetes.io/projected/2752407f-852c-433f-80e1-0a3d258c7edf-kube-api-access-d47rr\") pod \"ironic-db-create-8l585\" (UID: \"2752407f-852c-433f-80e1-0a3d258c7edf\") " pod="openstack/ironic-db-create-8l585" Feb 24 02:36:25.636621 master-0 kubenswrapper[31411]: I0224 02:36:25.636606 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nbgv\" (UniqueName: \"kubernetes.io/projected/9f4631c5-4285-4b86-8afb-2462577e53fc-kube-api-access-6nbgv\") pod \"ironic-ecce-account-create-update-2pvjj\" (UID: \"9f4631c5-4285-4b86-8afb-2462577e53fc\") " pod="openstack/ironic-ecce-account-create-update-2pvjj" Feb 24 02:36:25.639462 master-0 kubenswrapper[31411]: I0224 02:36:25.639417 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-scripts\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:36:25.641305 master-0 kubenswrapper[31411]: I0224 02:36:25.641257 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-config-data\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:36:25.641742 master-0 kubenswrapper[31411]: I0224 02:36:25.641687 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-combined-ca-bundle\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:36:25.658268 master-0 kubenswrapper[31411]: I0224 02:36:25.654181 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-db-sync-config-data\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:36:25.666397 master-0 kubenswrapper[31411]: I0224 02:36:25.666349 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d47rr\" (UniqueName: \"kubernetes.io/projected/2752407f-852c-433f-80e1-0a3d258c7edf-kube-api-access-d47rr\") pod \"ironic-db-create-8l585\" (UID: \"2752407f-852c-433f-80e1-0a3d258c7edf\") " pod="openstack/ironic-db-create-8l585" Feb 24 02:36:25.669556 master-0 kubenswrapper[31411]: I0224 02:36:25.669499 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4pl\" (UniqueName: \"kubernetes.io/projected/c356cf44-9774-4260-9463-2960be302f0e-kube-api-access-9w4pl\") pod \"cinder-6ac23-db-sync-mhchn\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:36:25.674760 master-0 kubenswrapper[31411]: I0224 02:36:25.674723 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64b4994945-klvx7"] Feb 24 02:36:25.677550 master-0 kubenswrapper[31411]: I0224 02:36:25.677521 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.681886 master-0 kubenswrapper[31411]: I0224 02:36:25.681825 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-njvpx"] Feb 24 02:36:25.685476 master-0 kubenswrapper[31411]: I0224 02:36:25.684552 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.691972 master-0 kubenswrapper[31411]: I0224 02:36:25.691869 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-njvpx"] Feb 24 02:36:25.693414 master-0 kubenswrapper[31411]: I0224 02:36:25.693371 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 24 02:36:25.693696 master-0 kubenswrapper[31411]: I0224 02:36:25.693661 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-8l585" Feb 24 02:36:25.694302 master-0 kubenswrapper[31411]: I0224 02:36:25.694283 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 24 02:36:25.705295 master-0 kubenswrapper[31411]: I0224 02:36:25.705217 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64b4994945-klvx7"] Feb 24 02:36:25.740506 master-0 kubenswrapper[31411]: I0224 02:36:25.740412 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-config\") pod \"neutron-db-sync-k6pnr\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:25.740774 master-0 kubenswrapper[31411]: I0224 02:36:25.740546 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4631c5-4285-4b86-8afb-2462577e53fc-operator-scripts\") pod 
\"ironic-ecce-account-create-update-2pvjj\" (UID: \"9f4631c5-4285-4b86-8afb-2462577e53fc\") " pod="openstack/ironic-ecce-account-create-update-2pvjj" Feb 24 02:36:25.740774 master-0 kubenswrapper[31411]: I0224 02:36:25.740666 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-combined-ca-bundle\") pod \"neutron-db-sync-k6pnr\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:25.740774 master-0 kubenswrapper[31411]: I0224 02:36:25.740710 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6288\" (UniqueName: \"kubernetes.io/projected/8d7c1b26-0a14-4626-b7ed-ec82103e883c-kube-api-access-m6288\") pod \"neutron-db-sync-k6pnr\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:25.740774 master-0 kubenswrapper[31411]: I0224 02:36:25.740768 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nbgv\" (UniqueName: \"kubernetes.io/projected/9f4631c5-4285-4b86-8afb-2462577e53fc-kube-api-access-6nbgv\") pod \"ironic-ecce-account-create-update-2pvjj\" (UID: \"9f4631c5-4285-4b86-8afb-2462577e53fc\") " pod="openstack/ironic-ecce-account-create-update-2pvjj" Feb 24 02:36:25.742256 master-0 kubenswrapper[31411]: I0224 02:36:25.742228 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4631c5-4285-4b86-8afb-2462577e53fc-operator-scripts\") pod \"ironic-ecce-account-create-update-2pvjj\" (UID: \"9f4631c5-4285-4b86-8afb-2462577e53fc\") " pod="openstack/ironic-ecce-account-create-update-2pvjj" Feb 24 02:36:25.746652 master-0 kubenswrapper[31411]: I0224 02:36:25.746605 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-config\") pod \"neutron-db-sync-k6pnr\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:25.749646 master-0 kubenswrapper[31411]: I0224 02:36:25.749600 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-combined-ca-bundle\") pod \"neutron-db-sync-k6pnr\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:25.771763 master-0 kubenswrapper[31411]: I0224 02:36:25.771709 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nbgv\" (UniqueName: \"kubernetes.io/projected/9f4631c5-4285-4b86-8afb-2462577e53fc-kube-api-access-6nbgv\") pod \"ironic-ecce-account-create-update-2pvjj\" (UID: \"9f4631c5-4285-4b86-8afb-2462577e53fc\") " pod="openstack/ironic-ecce-account-create-update-2pvjj" Feb 24 02:36:25.780654 master-0 kubenswrapper[31411]: I0224 02:36:25.780623 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6288\" (UniqueName: \"kubernetes.io/projected/8d7c1b26-0a14-4626-b7ed-ec82103e883c-kube-api-access-m6288\") pod \"neutron-db-sync-k6pnr\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:25.844192 master-0 kubenswrapper[31411]: I0224 02:36:25.844121 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wc8\" (UniqueName: \"kubernetes.io/projected/4f83a0b0-4fc6-4600-b532-d40414aa61a0-kube-api-access-m4wc8\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.844398 master-0 kubenswrapper[31411]: I0224 02:36:25.844232 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7edb7291-5510-45f1-810f-de8e6bf08cd0-logs\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.844398 master-0 kubenswrapper[31411]: I0224 02:36:25.844336 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-scripts\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.844475 master-0 kubenswrapper[31411]: I0224 02:36:25.844406 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-combined-ca-bundle\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.844508 master-0 kubenswrapper[31411]: I0224 02:36:25.844484 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-nb\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.844600 master-0 kubenswrapper[31411]: I0224 02:36:25.844582 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-824bs\" (UniqueName: \"kubernetes.io/projected/7edb7291-5510-45f1-810f-de8e6bf08cd0-kube-api-access-824bs\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.844759 master-0 kubenswrapper[31411]: I0224 02:36:25.844729 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-svc\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.844804 master-0 kubenswrapper[31411]: I0224 02:36:25.844786 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-sb\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.844890 master-0 kubenswrapper[31411]: I0224 02:36:25.844869 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-config-data\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.844943 master-0 kubenswrapper[31411]: I0224 02:36:25.844899 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-config\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.845084 master-0 kubenswrapper[31411]: I0224 02:36:25.845057 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-swift-storage-0\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 
02:36:25.872450 master-0 kubenswrapper[31411]: I0224 02:36:25.872381 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:25.883914 master-0 kubenswrapper[31411]: I0224 02:36:25.883652 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-ecce-account-create-update-2pvjj" Feb 24 02:36:25.914080 master-0 kubenswrapper[31411]: I0224 02:36:25.899940 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:36:25.952358 master-0 kubenswrapper[31411]: I0224 02:36:25.952237 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-sb\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.952358 master-0 kubenswrapper[31411]: I0224 02:36:25.952322 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-config-data\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.952358 master-0 kubenswrapper[31411]: I0224 02:36:25.952346 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-config\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.952358 master-0 kubenswrapper[31411]: I0224 02:36:25.952404 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-swift-storage-0\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.952358 master-0 kubenswrapper[31411]: I0224 02:36:25.952498 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wc8\" (UniqueName: \"kubernetes.io/projected/4f83a0b0-4fc6-4600-b532-d40414aa61a0-kube-api-access-m4wc8\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.952897 master-0 kubenswrapper[31411]: I0224 02:36:25.952522 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7edb7291-5510-45f1-810f-de8e6bf08cd0-logs\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.952897 master-0 kubenswrapper[31411]: I0224 02:36:25.952555 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-scripts\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.952897 master-0 kubenswrapper[31411]: I0224 02:36:25.952597 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-combined-ca-bundle\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.952897 master-0 kubenswrapper[31411]: I0224 02:36:25.952623 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-nb\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.952897 master-0 kubenswrapper[31411]: I0224 02:36:25.952653 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-824bs\" (UniqueName: \"kubernetes.io/projected/7edb7291-5510-45f1-810f-de8e6bf08cd0-kube-api-access-824bs\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.952897 master-0 kubenswrapper[31411]: I0224 02:36:25.952694 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-svc\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.956284 master-0 kubenswrapper[31411]: I0224 02:36:25.953404 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-sb\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.956284 master-0 kubenswrapper[31411]: I0224 02:36:25.953617 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7edb7291-5510-45f1-810f-de8e6bf08cd0-logs\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.956284 master-0 kubenswrapper[31411]: I0224 02:36:25.953958 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-svc\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.956284 master-0 kubenswrapper[31411]: I0224 02:36:25.955067 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-config\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.956284 master-0 kubenswrapper[31411]: I0224 02:36:25.954882 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-nb\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.957023 master-0 kubenswrapper[31411]: I0224 02:36:25.956972 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-swift-storage-0\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.967085 master-0 kubenswrapper[31411]: I0224 02:36:25.967048 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-config-data\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.976033 master-0 kubenswrapper[31411]: I0224 02:36:25.975993 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wc8\" (UniqueName: 
\"kubernetes.io/projected/4f83a0b0-4fc6-4600-b532-d40414aa61a0-kube-api-access-m4wc8\") pod \"dnsmasq-dns-64b4994945-klvx7\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:25.977290 master-0 kubenswrapper[31411]: I0224 02:36:25.977239 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-combined-ca-bundle\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.981964 master-0 kubenswrapper[31411]: I0224 02:36:25.981317 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-824bs\" (UniqueName: \"kubernetes.io/projected/7edb7291-5510-45f1-810f-de8e6bf08cd0-kube-api-access-824bs\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:25.984043 master-0 kubenswrapper[31411]: I0224 02:36:25.983954 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-scripts\") pod \"placement-db-sync-njvpx\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:26.144027 master-0 kubenswrapper[31411]: I0224 02:36:26.143960 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:26.171791 master-0 kubenswrapper[31411]: I0224 02:36:26.169032 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:26.516145 master-0 kubenswrapper[31411]: I0224 02:36:26.516082 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dnhq7" Feb 24 02:36:26.612796 master-0 kubenswrapper[31411]: I0224 02:36:26.612705 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlz7k\" (UniqueName: \"kubernetes.io/projected/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-kube-api-access-hlz7k\") pod \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " Feb 24 02:36:26.612997 master-0 kubenswrapper[31411]: I0224 02:36:26.612953 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-db-sync-config-data\") pod \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " Feb 24 02:36:26.613219 master-0 kubenswrapper[31411]: I0224 02:36:26.613175 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-config-data\") pod \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " Feb 24 02:36:26.615228 master-0 kubenswrapper[31411]: I0224 02:36:26.615165 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-597f6b8457-gn4tl"] Feb 24 02:36:26.636899 master-0 kubenswrapper[31411]: I0224 02:36:26.617372 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-combined-ca-bundle\") pod \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\" (UID: \"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af\") " Feb 24 02:36:26.636899 master-0 kubenswrapper[31411]: I0224 02:36:26.626694 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bw6l8"] Feb 24 02:36:26.636899 master-0 kubenswrapper[31411]: I0224 02:36:26.632317 31411 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af" (UID: "1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:26.640866 master-0 kubenswrapper[31411]: I0224 02:36:26.640800 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-kube-api-access-hlz7k" (OuterVolumeSpecName: "kube-api-access-hlz7k") pod "1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af" (UID: "1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af"). InnerVolumeSpecName "kube-api-access-hlz7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:26.697857 master-0 kubenswrapper[31411]: I0224 02:36:26.697770 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af" (UID: "1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:26.722045 master-0 kubenswrapper[31411]: I0224 02:36:26.720796 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:26.722045 master-0 kubenswrapper[31411]: I0224 02:36:26.720836 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlz7k\" (UniqueName: \"kubernetes.io/projected/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-kube-api-access-hlz7k\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:26.722045 master-0 kubenswrapper[31411]: I0224 02:36:26.720850 31411 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:26.742709 master-0 kubenswrapper[31411]: I0224 02:36:26.728491 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-ecce-account-create-update-2pvjj"] Feb 24 02:36:26.751981 master-0 kubenswrapper[31411]: I0224 02:36:26.751920 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-k6pnr"] Feb 24 02:36:26.771724 master-0 kubenswrapper[31411]: I0224 02:36:26.768083 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-8l585"] Feb 24 02:36:26.783163 master-0 kubenswrapper[31411]: I0224 02:36:26.783104 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-config-data" (OuterVolumeSpecName: "config-data") pod "1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af" (UID: "1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:26.789875 master-0 kubenswrapper[31411]: I0224 02:36:26.789737 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-ecce-account-create-update-2pvjj" event={"ID":"9f4631c5-4285-4b86-8afb-2462577e53fc","Type":"ContainerStarted","Data":"5a698de41db9e7b85a427a2e85f25ec1aeef6073a983763a78db472deb7fedf4"} Feb 24 02:36:26.796058 master-0 kubenswrapper[31411]: I0224 02:36:26.795929 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dnhq7" event={"ID":"1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af","Type":"ContainerDied","Data":"c436687278d8e1f31b859f287d062a78d9ad8078821276e79852e67b19f5d1bf"} Feb 24 02:36:26.796058 master-0 kubenswrapper[31411]: I0224 02:36:26.795964 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c436687278d8e1f31b859f287d062a78d9ad8078821276e79852e67b19f5d1bf" Feb 24 02:36:26.796058 master-0 kubenswrapper[31411]: I0224 02:36:26.795928 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dnhq7" Feb 24 02:36:26.801399 master-0 kubenswrapper[31411]: I0224 02:36:26.801345 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-8l585" event={"ID":"2752407f-852c-433f-80e1-0a3d258c7edf","Type":"ContainerStarted","Data":"098832266f2627d94c05d57f9ce9dcd12202bd5dfd4d21bb6f9b714f1e8c7852"} Feb 24 02:36:26.802762 master-0 kubenswrapper[31411]: I0224 02:36:26.802736 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-k6pnr" event={"ID":"8d7c1b26-0a14-4626-b7ed-ec82103e883c","Type":"ContainerStarted","Data":"cf15467a9b1539b1d67a6edf1125709a45c073d86057777ed7c43c4a40ca0ef3"} Feb 24 02:36:26.803803 master-0 kubenswrapper[31411]: I0224 02:36:26.803779 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-597f6b8457-gn4tl" event={"ID":"99772cb2-a116-47a4-a08b-e81a079e56f4","Type":"ContainerStarted","Data":"f706fbceaa2c4ce68c0914b6f364aba8219b52542cb7c51c971e00cdbbdf1cec"} Feb 24 02:36:26.804906 master-0 kubenswrapper[31411]: I0224 02:36:26.804881 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bw6l8" event={"ID":"a8678c29-b004-43c1-8393-c7a285064416","Type":"ContainerStarted","Data":"844c316fcd5157eb6aad95b20bd5ef9fa8a5a98cc93a7727cd0fc224e5b93785"} Feb 24 02:36:26.833978 master-0 kubenswrapper[31411]: I0224 02:36:26.833924 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:26.955390 master-0 kubenswrapper[31411]: I0224 02:36:26.955302 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-db-sync-mhchn"] Feb 24 02:36:27.154924 master-0 kubenswrapper[31411]: I0224 02:36:27.139842 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-njvpx"] Feb 24 
02:36:27.154924 master-0 kubenswrapper[31411]: I0224 02:36:27.139890 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64b4994945-klvx7"] Feb 24 02:36:27.197216 master-0 kubenswrapper[31411]: W0224 02:36:27.196237 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f83a0b0_4fc6_4600_b532_d40414aa61a0.slice/crio-9c5ea9aaab1cb8bae569d7caf9cbc4883b529f88600d99ba2c9dd854a6411a21 WatchSource:0}: Error finding container 9c5ea9aaab1cb8bae569d7caf9cbc4883b529f88600d99ba2c9dd854a6411a21: Status 404 returned error can't find the container with id 9c5ea9aaab1cb8bae569d7caf9cbc4883b529f88600d99ba2c9dd854a6411a21 Feb 24 02:36:27.236859 master-0 kubenswrapper[31411]: I0224 02:36:27.231787 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64b4994945-klvx7"] Feb 24 02:36:27.448673 master-0 kubenswrapper[31411]: I0224 02:36:27.445613 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f74bd995c-jflbg"] Feb 24 02:36:27.448673 master-0 kubenswrapper[31411]: E0224 02:36:27.447106 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af" containerName="glance-db-sync" Feb 24 02:36:27.448673 master-0 kubenswrapper[31411]: I0224 02:36:27.447127 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af" containerName="glance-db-sync" Feb 24 02:36:27.448673 master-0 kubenswrapper[31411]: I0224 02:36:27.447610 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af" containerName="glance-db-sync" Feb 24 02:36:27.453388 master-0 kubenswrapper[31411]: I0224 02:36:27.450143 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.599645 master-0 kubenswrapper[31411]: I0224 02:36:27.597944 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f74bd995c-jflbg"] Feb 24 02:36:27.606000 master-0 kubenswrapper[31411]: I0224 02:36:27.605163 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-sb\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.606000 master-0 kubenswrapper[31411]: I0224 02:36:27.605233 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-nb\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.606000 master-0 kubenswrapper[31411]: I0224 02:36:27.605284 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-config\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.606000 master-0 kubenswrapper[31411]: I0224 02:36:27.605309 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-swift-storage-0\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.606000 master-0 kubenswrapper[31411]: I0224 
02:36:27.605344 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dptb\" (UniqueName: \"kubernetes.io/projected/78056aef-5d74-4349-9faa-7ee56e5090b6-kube-api-access-5dptb\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.606000 master-0 kubenswrapper[31411]: I0224 02:36:27.605407 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-svc\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.708171 master-0 kubenswrapper[31411]: I0224 02:36:27.708034 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dptb\" (UniqueName: \"kubernetes.io/projected/78056aef-5d74-4349-9faa-7ee56e5090b6-kube-api-access-5dptb\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.708171 master-0 kubenswrapper[31411]: I0224 02:36:27.708153 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-svc\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.708382 master-0 kubenswrapper[31411]: I0224 02:36:27.708234 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-sb\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 
02:36:27.708382 master-0 kubenswrapper[31411]: I0224 02:36:27.708262 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-nb\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.708382 master-0 kubenswrapper[31411]: I0224 02:36:27.708304 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-config\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.708382 master-0 kubenswrapper[31411]: I0224 02:36:27.708326 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-swift-storage-0\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.709397 master-0 kubenswrapper[31411]: I0224 02:36:27.709338 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-swift-storage-0\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.709485 master-0 kubenswrapper[31411]: I0224 02:36:27.709443 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-sb\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 
24 02:36:27.710065 master-0 kubenswrapper[31411]: I0224 02:36:27.710046 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-nb\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.712161 master-0 kubenswrapper[31411]: I0224 02:36:27.711725 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-svc\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.731055 master-0 kubenswrapper[31411]: I0224 02:36:27.729811 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dptb\" (UniqueName: \"kubernetes.io/projected/78056aef-5d74-4349-9faa-7ee56e5090b6-kube-api-access-5dptb\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.731750 master-0 kubenswrapper[31411]: I0224 02:36:27.731709 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-config\") pod \"dnsmasq-dns-7f74bd995c-jflbg\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.784211 master-0 kubenswrapper[31411]: I0224 02:36:27.783835 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:27.840658 master-0 kubenswrapper[31411]: I0224 02:36:27.839150 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-njvpx" event={"ID":"7edb7291-5510-45f1-810f-de8e6bf08cd0","Type":"ContainerStarted","Data":"192e091c935f2d5576ba7c5e67ec86e5cddab711bb6fb7ef60fe8a51faf4d7d2"} Feb 24 02:36:27.844071 master-0 kubenswrapper[31411]: I0224 02:36:27.844006 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-8l585" event={"ID":"2752407f-852c-433f-80e1-0a3d258c7edf","Type":"ContainerStarted","Data":"9e1a11cd30d8e641c8a6d186495650d5b23fc6cf4d80e70d7d5428f1c31d1861"} Feb 24 02:36:27.849763 master-0 kubenswrapper[31411]: I0224 02:36:27.849605 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-k6pnr" event={"ID":"8d7c1b26-0a14-4626-b7ed-ec82103e883c","Type":"ContainerStarted","Data":"6549e94a090b89b6149dbf8755d38f83b1b705346f90e412b1f91fca52cc279c"} Feb 24 02:36:27.858489 master-0 kubenswrapper[31411]: I0224 02:36:27.858433 31411 generic.go:334] "Generic (PLEG): container finished" podID="99772cb2-a116-47a4-a08b-e81a079e56f4" containerID="7d1d856f31b47e7767721fa351e925b1f630a671cd2dbf9045f4e320dd3d984e" exitCode=0 Feb 24 02:36:27.858647 master-0 kubenswrapper[31411]: I0224 02:36:27.858488 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-597f6b8457-gn4tl" event={"ID":"99772cb2-a116-47a4-a08b-e81a079e56f4","Type":"ContainerDied","Data":"7d1d856f31b47e7767721fa351e925b1f630a671cd2dbf9045f4e320dd3d984e"} Feb 24 02:36:27.865149 master-0 kubenswrapper[31411]: I0224 02:36:27.864930 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b4994945-klvx7" event={"ID":"4f83a0b0-4fc6-4600-b532-d40414aa61a0","Type":"ContainerStarted","Data":"9c5ea9aaab1cb8bae569d7caf9cbc4883b529f88600d99ba2c9dd854a6411a21"} Feb 24 02:36:27.885157 master-0 
kubenswrapper[31411]: I0224 02:36:27.885080 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bw6l8" event={"ID":"a8678c29-b004-43c1-8393-c7a285064416","Type":"ContainerStarted","Data":"29c9cf6824c423edde9609e265c6f1c4cd9529c2ca073aad171b549468b2cff5"} Feb 24 02:36:27.896615 master-0 kubenswrapper[31411]: I0224 02:36:27.896189 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-create-8l585" podStartSLOduration=2.896161374 podStartE2EDuration="2.896161374s" podCreationTimestamp="2026-02-24 02:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:27.894772665 +0000 UTC m=+931.111970692" watchObservedRunningTime="2026-02-24 02:36:27.896161374 +0000 UTC m=+931.113359220" Feb 24 02:36:27.900393 master-0 kubenswrapper[31411]: I0224 02:36:27.900325 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-db-sync-mhchn" event={"ID":"c356cf44-9774-4260-9463-2960be302f0e","Type":"ContainerStarted","Data":"4ad28eccb95db9c578dd0e0bca11ea30637059ca654da894081956d824463da2"} Feb 24 02:36:27.913937 master-0 kubenswrapper[31411]: I0224 02:36:27.910090 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-ecce-account-create-update-2pvjj" event={"ID":"9f4631c5-4285-4b86-8afb-2462577e53fc","Type":"ContainerStarted","Data":"2ed19f5063aed050bc5f2a7be8758140c36c7d5a22c41712656ec008ef2ecb69"} Feb 24 02:36:27.983520 master-0 kubenswrapper[31411]: I0224 02:36:27.982194 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-k6pnr" podStartSLOduration=2.982154645 podStartE2EDuration="2.982154645s" podCreationTimestamp="2026-02-24 02:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:27.959310085 
+0000 UTC m=+931.176507931" watchObservedRunningTime="2026-02-24 02:36:27.982154645 +0000 UTC m=+931.199352511" Feb 24 02:36:28.024853 master-0 kubenswrapper[31411]: I0224 02:36:28.020226 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bw6l8" podStartSLOduration=3.020179301 podStartE2EDuration="3.020179301s" podCreationTimestamp="2026-02-24 02:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:27.998892934 +0000 UTC m=+931.216090780" watchObservedRunningTime="2026-02-24 02:36:28.020179301 +0000 UTC m=+931.237377147" Feb 24 02:36:28.086823 master-0 kubenswrapper[31411]: I0224 02:36:28.086719 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-ecce-account-create-update-2pvjj" podStartSLOduration=3.086695516 podStartE2EDuration="3.086695516s" podCreationTimestamp="2026-02-24 02:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:28.018629028 +0000 UTC m=+931.235826874" watchObservedRunningTime="2026-02-24 02:36:28.086695516 +0000 UTC m=+931.303893362" Feb 24 02:36:28.324590 master-0 kubenswrapper[31411]: I0224 02:36:28.324519 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-597f6b8457-gn4tl" Feb 24 02:36:28.451066 master-0 kubenswrapper[31411]: I0224 02:36:28.451005 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpb8d\" (UniqueName: \"kubernetes.io/projected/99772cb2-a116-47a4-a08b-e81a079e56f4-kube-api-access-tpb8d\") pod \"99772cb2-a116-47a4-a08b-e81a079e56f4\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " Feb 24 02:36:28.451289 master-0 kubenswrapper[31411]: I0224 02:36:28.451098 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-nb\") pod \"99772cb2-a116-47a4-a08b-e81a079e56f4\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " Feb 24 02:36:28.452149 master-0 kubenswrapper[31411]: I0224 02:36:28.451515 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-svc\") pod \"99772cb2-a116-47a4-a08b-e81a079e56f4\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " Feb 24 02:36:28.452149 master-0 kubenswrapper[31411]: I0224 02:36:28.451625 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-config\") pod \"99772cb2-a116-47a4-a08b-e81a079e56f4\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " Feb 24 02:36:28.452149 master-0 kubenswrapper[31411]: I0224 02:36:28.451752 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-swift-storage-0\") pod \"99772cb2-a116-47a4-a08b-e81a079e56f4\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " Feb 24 02:36:28.452149 master-0 kubenswrapper[31411]: I0224 02:36:28.451787 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-sb\") pod \"99772cb2-a116-47a4-a08b-e81a079e56f4\" (UID: \"99772cb2-a116-47a4-a08b-e81a079e56f4\") " Feb 24 02:36:28.456337 master-0 kubenswrapper[31411]: I0224 02:36:28.456103 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99772cb2-a116-47a4-a08b-e81a079e56f4-kube-api-access-tpb8d" (OuterVolumeSpecName: "kube-api-access-tpb8d") pod "99772cb2-a116-47a4-a08b-e81a079e56f4" (UID: "99772cb2-a116-47a4-a08b-e81a079e56f4"). InnerVolumeSpecName "kube-api-access-tpb8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:28.502598 master-0 kubenswrapper[31411]: I0224 02:36:28.490708 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99772cb2-a116-47a4-a08b-e81a079e56f4" (UID: "99772cb2-a116-47a4-a08b-e81a079e56f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:28.502598 master-0 kubenswrapper[31411]: I0224 02:36:28.490835 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "99772cb2-a116-47a4-a08b-e81a079e56f4" (UID: "99772cb2-a116-47a4-a08b-e81a079e56f4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:28.517788 master-0 kubenswrapper[31411]: I0224 02:36:28.506645 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8705a-default-external-api-0"] Feb 24 02:36:28.517788 master-0 kubenswrapper[31411]: E0224 02:36:28.507404 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99772cb2-a116-47a4-a08b-e81a079e56f4" containerName="init" Feb 24 02:36:28.517788 master-0 kubenswrapper[31411]: I0224 02:36:28.507420 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="99772cb2-a116-47a4-a08b-e81a079e56f4" containerName="init" Feb 24 02:36:28.517788 master-0 kubenswrapper[31411]: I0224 02:36:28.507756 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="99772cb2-a116-47a4-a08b-e81a079e56f4" containerName="init" Feb 24 02:36:28.517788 master-0 kubenswrapper[31411]: I0224 02:36:28.509140 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.517788 master-0 kubenswrapper[31411]: I0224 02:36:28.511499 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "99772cb2-a116-47a4-a08b-e81a079e56f4" (UID: "99772cb2-a116-47a4-a08b-e81a079e56f4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:28.517788 master-0 kubenswrapper[31411]: I0224 02:36:28.513697 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-8705a-default-external-config-data" Feb 24 02:36:28.517788 master-0 kubenswrapper[31411]: I0224 02:36:28.514062 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 24 02:36:28.536619 master-0 kubenswrapper[31411]: I0224 02:36:28.536537 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-config" (OuterVolumeSpecName: "config") pod "99772cb2-a116-47a4-a08b-e81a079e56f4" (UID: "99772cb2-a116-47a4-a08b-e81a079e56f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:28.554059 master-0 kubenswrapper[31411]: I0224 02:36:28.553987 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "99772cb2-a116-47a4-a08b-e81a079e56f4" (UID: "99772cb2-a116-47a4-a08b-e81a079e56f4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:28.555047 master-0 kubenswrapper[31411]: I0224 02:36:28.555002 31411 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:28.555047 master-0 kubenswrapper[31411]: I0224 02:36:28.555044 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:28.555143 master-0 kubenswrapper[31411]: I0224 02:36:28.555056 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpb8d\" (UniqueName: \"kubernetes.io/projected/99772cb2-a116-47a4-a08b-e81a079e56f4-kube-api-access-tpb8d\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:28.555143 master-0 kubenswrapper[31411]: I0224 02:36:28.555066 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:28.555143 master-0 kubenswrapper[31411]: I0224 02:36:28.555079 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:28.555143 master-0 kubenswrapper[31411]: I0224 02:36:28.555090 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99772cb2-a116-47a4-a08b-e81a079e56f4-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:28.574860 master-0 kubenswrapper[31411]: I0224 02:36:28.574811 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-external-api-0"] Feb 24 02:36:28.617849 master-0 
kubenswrapper[31411]: W0224 02:36:28.617798 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78056aef_5d74_4349_9faa_7ee56e5090b6.slice/crio-e747023b7c2b65a08acd038638272e87654f99bdd6bad24712f741a7a4d80736 WatchSource:0}: Error finding container e747023b7c2b65a08acd038638272e87654f99bdd6bad24712f741a7a4d80736: Status 404 returned error can't find the container with id e747023b7c2b65a08acd038638272e87654f99bdd6bad24712f741a7a4d80736 Feb 24 02:36:28.649682 master-0 kubenswrapper[31411]: I0224 02:36:28.648384 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f74bd995c-jflbg"] Feb 24 02:36:28.658200 master-0 kubenswrapper[31411]: I0224 02:36:28.656460 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-config-data\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.658200 master-0 kubenswrapper[31411]: I0224 02:36:28.656505 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-httpd-run\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.658200 master-0 kubenswrapper[31411]: I0224 02:36:28.656558 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-scripts\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.658200 
master-0 kubenswrapper[31411]: I0224 02:36:28.656631 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.658200 master-0 kubenswrapper[31411]: I0224 02:36:28.656680 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5w68\" (UniqueName: \"kubernetes.io/projected/646d5895-0594-419d-bc57-3beb2730117e-kube-api-access-k5w68\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.658200 master-0 kubenswrapper[31411]: I0224 02:36:28.656716 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-logs\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.658200 master-0 kubenswrapper[31411]: I0224 02:36:28.656772 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-combined-ca-bundle\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.760618 master-0 kubenswrapper[31411]: I0224 02:36:28.758892 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.760618 master-0 kubenswrapper[31411]: I0224 02:36:28.759524 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5w68\" (UniqueName: \"kubernetes.io/projected/646d5895-0594-419d-bc57-3beb2730117e-kube-api-access-k5w68\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.760618 master-0 kubenswrapper[31411]: I0224 02:36:28.759610 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-logs\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.760618 master-0 kubenswrapper[31411]: I0224 02:36:28.759684 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-combined-ca-bundle\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.760618 master-0 kubenswrapper[31411]: I0224 02:36:28.760185 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-config-data\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.760618 master-0 kubenswrapper[31411]: I0224 02:36:28.760223 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-httpd-run\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.760618 master-0 kubenswrapper[31411]: I0224 02:36:28.760269 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-scripts\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.762490 master-0 kubenswrapper[31411]: I0224 02:36:28.761687 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-logs\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.762490 master-0 kubenswrapper[31411]: I0224 02:36:28.761965 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-httpd-run\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.762956 master-0 kubenswrapper[31411]: I0224 02:36:28.762911 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 02:36:28.763005 master-0 kubenswrapper[31411]: I0224 02:36:28.762965 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e389381360593e6090fc0484f10effeb9de4577cbce267bb52534b321405c56c/globalmount\"" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.764580 master-0 kubenswrapper[31411]: I0224 02:36:28.764533 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-combined-ca-bundle\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.766707 master-0 kubenswrapper[31411]: I0224 02:36:28.765326 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-config-data\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.770645 master-0 kubenswrapper[31411]: I0224 02:36:28.767959 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-scripts\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.780547 master-0 kubenswrapper[31411]: I0224 02:36:28.780513 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5w68\" (UniqueName: 
\"kubernetes.io/projected/646d5895-0594-419d-bc57-3beb2730117e-kube-api-access-k5w68\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:28.940252 master-0 kubenswrapper[31411]: I0224 02:36:28.940188 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" event={"ID":"78056aef-5d74-4349-9faa-7ee56e5090b6","Type":"ContainerStarted","Data":"e747023b7c2b65a08acd038638272e87654f99bdd6bad24712f741a7a4d80736"} Feb 24 02:36:28.941643 master-0 kubenswrapper[31411]: I0224 02:36:28.941610 31411 generic.go:334] "Generic (PLEG): container finished" podID="9f4631c5-4285-4b86-8afb-2462577e53fc" containerID="2ed19f5063aed050bc5f2a7be8758140c36c7d5a22c41712656ec008ef2ecb69" exitCode=0 Feb 24 02:36:28.941736 master-0 kubenswrapper[31411]: I0224 02:36:28.941671 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-ecce-account-create-update-2pvjj" event={"ID":"9f4631c5-4285-4b86-8afb-2462577e53fc","Type":"ContainerDied","Data":"2ed19f5063aed050bc5f2a7be8758140c36c7d5a22c41712656ec008ef2ecb69"} Feb 24 02:36:28.948106 master-0 kubenswrapper[31411]: I0224 02:36:28.946372 31411 generic.go:334] "Generic (PLEG): container finished" podID="2752407f-852c-433f-80e1-0a3d258c7edf" containerID="9e1a11cd30d8e641c8a6d186495650d5b23fc6cf4d80e70d7d5428f1c31d1861" exitCode=0 Feb 24 02:36:28.948106 master-0 kubenswrapper[31411]: I0224 02:36:28.946491 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-8l585" event={"ID":"2752407f-852c-433f-80e1-0a3d258c7edf","Type":"ContainerDied","Data":"9e1a11cd30d8e641c8a6d186495650d5b23fc6cf4d80e70d7d5428f1c31d1861"} Feb 24 02:36:28.961954 master-0 kubenswrapper[31411]: I0224 02:36:28.958849 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-597f6b8457-gn4tl" 
event={"ID":"99772cb2-a116-47a4-a08b-e81a079e56f4","Type":"ContainerDied","Data":"f706fbceaa2c4ce68c0914b6f364aba8219b52542cb7c51c971e00cdbbdf1cec"} Feb 24 02:36:28.961954 master-0 kubenswrapper[31411]: I0224 02:36:28.958913 31411 scope.go:117] "RemoveContainer" containerID="7d1d856f31b47e7767721fa351e925b1f630a671cd2dbf9045f4e320dd3d984e" Feb 24 02:36:28.961954 master-0 kubenswrapper[31411]: I0224 02:36:28.959071 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-597f6b8457-gn4tl" Feb 24 02:36:28.975592 master-0 kubenswrapper[31411]: I0224 02:36:28.975126 31411 generic.go:334] "Generic (PLEG): container finished" podID="4f83a0b0-4fc6-4600-b532-d40414aa61a0" containerID="9997289192bca284bf363402968ae2620809afb4e747e870822428fa33bdb998" exitCode=0 Feb 24 02:36:28.976169 master-0 kubenswrapper[31411]: I0224 02:36:28.976124 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b4994945-klvx7" event={"ID":"4f83a0b0-4fc6-4600-b532-d40414aa61a0","Type":"ContainerDied","Data":"9997289192bca284bf363402968ae2620809afb4e747e870822428fa33bdb998"} Feb 24 02:36:29.214784 master-0 kubenswrapper[31411]: I0224 02:36:29.214699 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-597f6b8457-gn4tl"] Feb 24 02:36:29.224087 master-0 kubenswrapper[31411]: I0224 02:36:29.223413 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-597f6b8457-gn4tl"] Feb 24 02:36:29.569463 master-0 kubenswrapper[31411]: I0224 02:36:29.569406 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8705a-default-internal-api-0"] Feb 24 02:36:29.573528 master-0 kubenswrapper[31411]: I0224 02:36:29.573110 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.581776 master-0 kubenswrapper[31411]: I0224 02:36:29.581617 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-8705a-default-internal-config-data" Feb 24 02:36:29.596892 master-0 kubenswrapper[31411]: I0224 02:36:29.594946 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"] Feb 24 02:36:29.633112 master-0 kubenswrapper[31411]: I0224 02:36:29.633059 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:29.733880 master-0 kubenswrapper[31411]: I0224 02:36:29.733793 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-svc\") pod \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " Feb 24 02:36:29.734197 master-0 kubenswrapper[31411]: I0224 02:36:29.734031 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-swift-storage-0\") pod \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " Feb 24 02:36:29.734197 master-0 kubenswrapper[31411]: I0224 02:36:29.734211 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-config\") pod \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " Feb 24 02:36:29.734197 master-0 kubenswrapper[31411]: I0224 02:36:29.734252 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4wc8\" (UniqueName: 
\"kubernetes.io/projected/4f83a0b0-4fc6-4600-b532-d40414aa61a0-kube-api-access-m4wc8\") pod \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " Feb 24 02:36:29.734404 master-0 kubenswrapper[31411]: I0224 02:36:29.734277 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-sb\") pod \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " Feb 24 02:36:29.734404 master-0 kubenswrapper[31411]: I0224 02:36:29.734340 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-nb\") pod \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\" (UID: \"4f83a0b0-4fc6-4600-b532-d40414aa61a0\") " Feb 24 02:36:29.735077 master-0 kubenswrapper[31411]: I0224 02:36:29.734680 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsrfd\" (UniqueName: \"kubernetes.io/projected/d6814476-68e8-4541-91b5-e5a159982ff5-kube-api-access-jsrfd\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.735077 master-0 kubenswrapper[31411]: I0224 02:36:29.734741 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-httpd-run\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.735077 master-0 kubenswrapper[31411]: I0224 02:36:29.734763 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-logs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.735077 master-0 kubenswrapper[31411]: I0224 02:36:29.734871 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-config-data\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.735077 master-0 kubenswrapper[31411]: I0224 02:36:29.734937 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-scripts\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.735077 master-0 kubenswrapper[31411]: I0224 02:36:29.735031 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-combined-ca-bundle\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.735332 master-0 kubenswrapper[31411]: I0224 02:36:29.735116 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.748724 master-0 
kubenswrapper[31411]: I0224 02:36:29.748670 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f83a0b0-4fc6-4600-b532-d40414aa61a0-kube-api-access-m4wc8" (OuterVolumeSpecName: "kube-api-access-m4wc8") pod "4f83a0b0-4fc6-4600-b532-d40414aa61a0" (UID: "4f83a0b0-4fc6-4600-b532-d40414aa61a0"). InnerVolumeSpecName "kube-api-access-m4wc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:29.758910 master-0 kubenswrapper[31411]: I0224 02:36:29.758850 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4f83a0b0-4fc6-4600-b532-d40414aa61a0" (UID: "4f83a0b0-4fc6-4600-b532-d40414aa61a0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:29.787584 master-0 kubenswrapper[31411]: I0224 02:36:29.780689 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f83a0b0-4fc6-4600-b532-d40414aa61a0" (UID: "4f83a0b0-4fc6-4600-b532-d40414aa61a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:29.787584 master-0 kubenswrapper[31411]: I0224 02:36:29.781687 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f83a0b0-4fc6-4600-b532-d40414aa61a0" (UID: "4f83a0b0-4fc6-4600-b532-d40414aa61a0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:29.787584 master-0 kubenswrapper[31411]: I0224 02:36:29.782342 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f83a0b0-4fc6-4600-b532-d40414aa61a0" (UID: "4f83a0b0-4fc6-4600-b532-d40414aa61a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:29.787584 master-0 kubenswrapper[31411]: I0224 02:36:29.783487 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-config" (OuterVolumeSpecName: "config") pod "4f83a0b0-4fc6-4600-b532-d40414aa61a0" (UID: "4f83a0b0-4fc6-4600-b532-d40414aa61a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:29.836648 master-0 kubenswrapper[31411]: I0224 02:36:29.836592 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-config-data\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836673 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-scripts\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836708 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-combined-ca-bundle\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836750 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836814 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsrfd\" (UniqueName: \"kubernetes.io/projected/d6814476-68e8-4541-91b5-e5a159982ff5-kube-api-access-jsrfd\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836844 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-httpd-run\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836863 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-logs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836940 31411 
reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836954 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836963 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4wc8\" (UniqueName: \"kubernetes.io/projected/4f83a0b0-4fc6-4600-b532-d40414aa61a0-kube-api-access-m4wc8\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836972 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836981 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:29.837267 master-0 kubenswrapper[31411]: I0224 02:36:29.836990 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f83a0b0-4fc6-4600-b532-d40414aa61a0-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:29.838009 master-0 kubenswrapper[31411]: I0224 02:36:29.837479 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-logs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " 
pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.839015 master-0 kubenswrapper[31411]: I0224 02:36:29.838960 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-httpd-run\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.840495 master-0 kubenswrapper[31411]: I0224 02:36:29.840461 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 02:36:29.840610 master-0 kubenswrapper[31411]: I0224 02:36:29.840503 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/568a2d9cea509129bb10d2a8daaed82c1a66b4ccaea30faa55e2d5b91b5cf92d/globalmount\"" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.842935 master-0 kubenswrapper[31411]: I0224 02:36:29.842884 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-config-data\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.843385 master-0 kubenswrapper[31411]: I0224 02:36:29.843339 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-scripts\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " 
pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.844921 master-0 kubenswrapper[31411]: I0224 02:36:29.844875 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-combined-ca-bundle\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.893325 master-0 kubenswrapper[31411]: I0224 02:36:29.893245 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsrfd\" (UniqueName: \"kubernetes.io/projected/d6814476-68e8-4541-91b5-e5a159982ff5-kube-api-access-jsrfd\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:29.993032 master-0 kubenswrapper[31411]: I0224 02:36:29.992873 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b4994945-klvx7" event={"ID":"4f83a0b0-4fc6-4600-b532-d40414aa61a0","Type":"ContainerDied","Data":"9c5ea9aaab1cb8bae569d7caf9cbc4883b529f88600d99ba2c9dd854a6411a21"} Feb 24 02:36:29.993032 master-0 kubenswrapper[31411]: I0224 02:36:29.992944 31411 scope.go:117] "RemoveContainer" containerID="9997289192bca284bf363402968ae2620809afb4e747e870822428fa33bdb998" Feb 24 02:36:29.993335 master-0 kubenswrapper[31411]: I0224 02:36:29.993094 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64b4994945-klvx7" Feb 24 02:36:30.000370 master-0 kubenswrapper[31411]: I0224 02:36:30.000315 31411 generic.go:334] "Generic (PLEG): container finished" podID="78056aef-5d74-4349-9faa-7ee56e5090b6" containerID="80f83c7016640d60426f83a31172755f75a0e237220d8babddf8665022bb3a2d" exitCode=0 Feb 24 02:36:30.000701 master-0 kubenswrapper[31411]: I0224 02:36:30.000667 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" event={"ID":"78056aef-5d74-4349-9faa-7ee56e5090b6","Type":"ContainerDied","Data":"80f83c7016640d60426f83a31172755f75a0e237220d8babddf8665022bb3a2d"} Feb 24 02:36:30.271193 master-0 kubenswrapper[31411]: I0224 02:36:30.271132 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64b4994945-klvx7"] Feb 24 02:36:30.276618 master-0 kubenswrapper[31411]: I0224 02:36:30.276474 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:30.296533 master-0 kubenswrapper[31411]: I0224 02:36:30.296482 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64b4994945-klvx7"] Feb 24 02:36:30.411496 master-0 kubenswrapper[31411]: I0224 02:36:30.411415 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:31.016792 master-0 kubenswrapper[31411]: I0224 02:36:31.016708 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" event={"ID":"78056aef-5d74-4349-9faa-7ee56e5090b6","Type":"ContainerStarted","Data":"e385e5e5a8a5cfb0183077cd94d1d4cb19ee83e46fecb9613441aff3b0fa8342"} Feb 24 02:36:31.017468 master-0 kubenswrapper[31411]: I0224 02:36:31.016975 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:31.049207 master-0 kubenswrapper[31411]: I0224 02:36:31.049117 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" podStartSLOduration=4.04909434 podStartE2EDuration="4.04909434s" podCreationTimestamp="2026-02-24 02:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:31.043489152 +0000 UTC m=+934.260687048" watchObservedRunningTime="2026-02-24 02:36:31.04909434 +0000 UTC m=+934.266292186" Feb 24 02:36:31.106119 master-0 kubenswrapper[31411]: I0224 02:36:31.106032 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f83a0b0-4fc6-4600-b532-d40414aa61a0" path="/var/lib/kubelet/pods/4f83a0b0-4fc6-4600-b532-d40414aa61a0/volumes" Feb 24 02:36:31.107090 master-0 kubenswrapper[31411]: I0224 02:36:31.107048 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99772cb2-a116-47a4-a08b-e81a079e56f4" path="/var/lib/kubelet/pods/99772cb2-a116-47a4-a08b-e81a079e56f4/volumes" Feb 24 02:36:32.559266 master-0 kubenswrapper[31411]: I0224 02:36:32.559180 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod 
\"glance-8705a-default-internal-api-0\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:36:32.695751 master-0 kubenswrapper[31411]: I0224 02:36:32.695582 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:36:32.697782 master-0 kubenswrapper[31411]: I0224 02:36:32.697739 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-8l585"
Feb 24 02:36:32.707240 master-0 kubenswrapper[31411]: I0224 02:36:32.707187 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-ecce-account-create-update-2pvjj"
Feb 24 02:36:32.857343 master-0 kubenswrapper[31411]: I0224 02:36:32.857296 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d47rr\" (UniqueName: \"kubernetes.io/projected/2752407f-852c-433f-80e1-0a3d258c7edf-kube-api-access-d47rr\") pod \"2752407f-852c-433f-80e1-0a3d258c7edf\" (UID: \"2752407f-852c-433f-80e1-0a3d258c7edf\") "
Feb 24 02:36:32.857492 master-0 kubenswrapper[31411]: I0224 02:36:32.857456 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4631c5-4285-4b86-8afb-2462577e53fc-operator-scripts\") pod \"9f4631c5-4285-4b86-8afb-2462577e53fc\" (UID: \"9f4631c5-4285-4b86-8afb-2462577e53fc\") "
Feb 24 02:36:32.857646 master-0 kubenswrapper[31411]: I0224 02:36:32.857625 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nbgv\" (UniqueName: \"kubernetes.io/projected/9f4631c5-4285-4b86-8afb-2462577e53fc-kube-api-access-6nbgv\") pod \"9f4631c5-4285-4b86-8afb-2462577e53fc\" (UID: \"9f4631c5-4285-4b86-8afb-2462577e53fc\") "
Feb 24 02:36:32.857900 master-0 kubenswrapper[31411]: I0224 02:36:32.857872 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2752407f-852c-433f-80e1-0a3d258c7edf-operator-scripts\") pod \"2752407f-852c-433f-80e1-0a3d258c7edf\" (UID: \"2752407f-852c-433f-80e1-0a3d258c7edf\") "
Feb 24 02:36:32.858137 master-0 kubenswrapper[31411]: I0224 02:36:32.858039 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f4631c5-4285-4b86-8afb-2462577e53fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f4631c5-4285-4b86-8afb-2462577e53fc" (UID: "9f4631c5-4285-4b86-8afb-2462577e53fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:36:32.858432 master-0 kubenswrapper[31411]: I0224 02:36:32.858398 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4631c5-4285-4b86-8afb-2462577e53fc-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:32.858500 master-0 kubenswrapper[31411]: I0224 02:36:32.858424 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2752407f-852c-433f-80e1-0a3d258c7edf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2752407f-852c-433f-80e1-0a3d258c7edf" (UID: "2752407f-852c-433f-80e1-0a3d258c7edf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:36:32.863810 master-0 kubenswrapper[31411]: I0224 02:36:32.863759 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2752407f-852c-433f-80e1-0a3d258c7edf-kube-api-access-d47rr" (OuterVolumeSpecName: "kube-api-access-d47rr") pod "2752407f-852c-433f-80e1-0a3d258c7edf" (UID: "2752407f-852c-433f-80e1-0a3d258c7edf"). InnerVolumeSpecName "kube-api-access-d47rr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:36:32.865097 master-0 kubenswrapper[31411]: I0224 02:36:32.865035 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4631c5-4285-4b86-8afb-2462577e53fc-kube-api-access-6nbgv" (OuterVolumeSpecName: "kube-api-access-6nbgv") pod "9f4631c5-4285-4b86-8afb-2462577e53fc" (UID: "9f4631c5-4285-4b86-8afb-2462577e53fc"). InnerVolumeSpecName "kube-api-access-6nbgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:36:32.965816 master-0 kubenswrapper[31411]: I0224 02:36:32.963788 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2752407f-852c-433f-80e1-0a3d258c7edf-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:32.965816 master-0 kubenswrapper[31411]: I0224 02:36:32.963872 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d47rr\" (UniqueName: \"kubernetes.io/projected/2752407f-852c-433f-80e1-0a3d258c7edf-kube-api-access-d47rr\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:32.965816 master-0 kubenswrapper[31411]: I0224 02:36:32.963891 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nbgv\" (UniqueName: \"kubernetes.io/projected/9f4631c5-4285-4b86-8afb-2462577e53fc-kube-api-access-6nbgv\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:33.070544 master-0 kubenswrapper[31411]: I0224 02:36:33.069781 31411 generic.go:334] "Generic (PLEG): container finished" podID="a8678c29-b004-43c1-8393-c7a285064416" containerID="29c9cf6824c423edde9609e265c6f1c4cd9529c2ca073aad171b549468b2cff5" exitCode=0
Feb 24 02:36:33.070544 master-0 kubenswrapper[31411]: I0224 02:36:33.069884 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bw6l8" event={"ID":"a8678c29-b004-43c1-8393-c7a285064416","Type":"ContainerDied","Data":"29c9cf6824c423edde9609e265c6f1c4cd9529c2ca073aad171b549468b2cff5"}
Feb 24 02:36:33.071734 master-0 kubenswrapper[31411]: I0224 02:36:33.071681 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-ecce-account-create-update-2pvjj" event={"ID":"9f4631c5-4285-4b86-8afb-2462577e53fc","Type":"ContainerDied","Data":"5a698de41db9e7b85a427a2e85f25ec1aeef6073a983763a78db472deb7fedf4"}
Feb 24 02:36:33.071734 master-0 kubenswrapper[31411]: I0224 02:36:33.071708 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a698de41db9e7b85a427a2e85f25ec1aeef6073a983763a78db472deb7fedf4"
Feb 24 02:36:33.071911 master-0 kubenswrapper[31411]: I0224 02:36:33.071753 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-ecce-account-create-update-2pvjj"
Feb 24 02:36:33.075225 master-0 kubenswrapper[31411]: I0224 02:36:33.073377 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-njvpx" event={"ID":"7edb7291-5510-45f1-810f-de8e6bf08cd0","Type":"ContainerStarted","Data":"a396992afc6bd6a8dd79fe828cd981509e7cf8dd75f880184e243de5f99c769a"}
Feb 24 02:36:33.075953 master-0 kubenswrapper[31411]: I0224 02:36:33.075925 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-8l585" event={"ID":"2752407f-852c-433f-80e1-0a3d258c7edf","Type":"ContainerDied","Data":"098832266f2627d94c05d57f9ce9dcd12202bd5dfd4d21bb6f9b714f1e8c7852"}
Feb 24 02:36:33.076018 master-0 kubenswrapper[31411]: I0224 02:36:33.075952 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-8l585"
Feb 24 02:36:33.076195 master-0 kubenswrapper[31411]: I0224 02:36:33.075957 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="098832266f2627d94c05d57f9ce9dcd12202bd5dfd4d21bb6f9b714f1e8c7852"
Feb 24 02:36:33.136047 master-0 kubenswrapper[31411]: I0224 02:36:33.135947 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-njvpx" podStartSLOduration=2.798107086 podStartE2EDuration="8.135922479s" podCreationTimestamp="2026-02-24 02:36:25 +0000 UTC" firstStartedPulling="2026-02-24 02:36:27.191316195 +0000 UTC m=+930.408514041" lastFinishedPulling="2026-02-24 02:36:32.529131558 +0000 UTC m=+935.746329434" observedRunningTime="2026-02-24 02:36:33.125938229 +0000 UTC m=+936.343136075" watchObservedRunningTime="2026-02-24 02:36:33.135922479 +0000 UTC m=+936.353120325"
Feb 24 02:36:33.306222 master-0 kubenswrapper[31411]: I0224 02:36:33.306157 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-external-api-0"]
Feb 24 02:36:33.408172 master-0 kubenswrapper[31411]: I0224 02:36:33.408119 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"]
Feb 24 02:36:34.131765 master-0 kubenswrapper[31411]: I0224 02:36:34.131700 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d6814476-68e8-4541-91b5-e5a159982ff5","Type":"ContainerStarted","Data":"5cce40d8d29e9be9445938ac036a92b7bca01c79ad779ad52b6f20cf11b6fe70"}
Feb 24 02:36:34.145634 master-0 kubenswrapper[31411]: I0224 02:36:34.142996 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"646d5895-0594-419d-bc57-3beb2730117e","Type":"ContainerStarted","Data":"dd82f27f0c57a76f84c5efda33e8d6c01072fe917998ad05ab2156d3341121d6"}
Feb 24 02:36:34.145634 master-0 kubenswrapper[31411]: I0224 02:36:34.143096 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"646d5895-0594-419d-bc57-3beb2730117e","Type":"ContainerStarted","Data":"11aaafe12f3c30b5e3c71a37c9f04a1b82d2744958da0fc37a85b7c6e89e99f5"}
Feb 24 02:36:34.641805 master-0 kubenswrapper[31411]: I0224 02:36:34.641749 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:34.732417 master-0 kubenswrapper[31411]: I0224 02:36:34.729287 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-combined-ca-bundle\") pod \"a8678c29-b004-43c1-8393-c7a285064416\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") "
Feb 24 02:36:34.732417 master-0 kubenswrapper[31411]: I0224 02:36:34.731149 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-fernet-keys\") pod \"a8678c29-b004-43c1-8393-c7a285064416\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") "
Feb 24 02:36:34.732417 master-0 kubenswrapper[31411]: I0224 02:36:34.731250 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-credential-keys\") pod \"a8678c29-b004-43c1-8393-c7a285064416\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") "
Feb 24 02:36:34.732417 master-0 kubenswrapper[31411]: I0224 02:36:34.731380 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-config-data\") pod \"a8678c29-b004-43c1-8393-c7a285064416\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") "
Feb 24 02:36:34.732417 master-0 kubenswrapper[31411]: I0224 02:36:34.731450 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk6vs\" (UniqueName: \"kubernetes.io/projected/a8678c29-b004-43c1-8393-c7a285064416-kube-api-access-wk6vs\") pod \"a8678c29-b004-43c1-8393-c7a285064416\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") "
Feb 24 02:36:34.732417 master-0 kubenswrapper[31411]: I0224 02:36:34.731486 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-scripts\") pod \"a8678c29-b004-43c1-8393-c7a285064416\" (UID: \"a8678c29-b004-43c1-8393-c7a285064416\") "
Feb 24 02:36:34.742312 master-0 kubenswrapper[31411]: I0224 02:36:34.737357 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a8678c29-b004-43c1-8393-c7a285064416" (UID: "a8678c29-b004-43c1-8393-c7a285064416"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:36:34.742312 master-0 kubenswrapper[31411]: I0224 02:36:34.737962 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-scripts" (OuterVolumeSpecName: "scripts") pod "a8678c29-b004-43c1-8393-c7a285064416" (UID: "a8678c29-b004-43c1-8393-c7a285064416"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:36:34.744618 master-0 kubenswrapper[31411]: I0224 02:36:34.742534 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a8678c29-b004-43c1-8393-c7a285064416" (UID: "a8678c29-b004-43c1-8393-c7a285064416"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:36:34.759121 master-0 kubenswrapper[31411]: I0224 02:36:34.759046 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8678c29-b004-43c1-8393-c7a285064416-kube-api-access-wk6vs" (OuterVolumeSpecName: "kube-api-access-wk6vs") pod "a8678c29-b004-43c1-8393-c7a285064416" (UID: "a8678c29-b004-43c1-8393-c7a285064416"). InnerVolumeSpecName "kube-api-access-wk6vs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:36:34.763704 master-0 kubenswrapper[31411]: I0224 02:36:34.763558 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-config-data" (OuterVolumeSpecName: "config-data") pod "a8678c29-b004-43c1-8393-c7a285064416" (UID: "a8678c29-b004-43c1-8393-c7a285064416"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:36:34.779766 master-0 kubenswrapper[31411]: I0224 02:36:34.778905 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8678c29-b004-43c1-8393-c7a285064416" (UID: "a8678c29-b004-43c1-8393-c7a285064416"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:36:34.838171 master-0 kubenswrapper[31411]: I0224 02:36:34.838110 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:34.838171 master-0 kubenswrapper[31411]: I0224 02:36:34.838163 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk6vs\" (UniqueName: \"kubernetes.io/projected/a8678c29-b004-43c1-8393-c7a285064416-kube-api-access-wk6vs\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:34.838171 master-0 kubenswrapper[31411]: I0224 02:36:34.838178 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:34.838345 master-0 kubenswrapper[31411]: I0224 02:36:34.838188 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:34.838345 master-0 kubenswrapper[31411]: I0224 02:36:34.838200 31411 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-fernet-keys\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:34.838345 master-0 kubenswrapper[31411]: I0224 02:36:34.838209 31411 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8678c29-b004-43c1-8393-c7a285064416-credential-keys\") on node \"master-0\" DevicePath \"\""
Feb 24 02:36:35.084474 master-0 kubenswrapper[31411]: I0224 02:36:35.084391 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8705a-default-external-api-0"]
Feb 24 02:36:35.159960 master-0 kubenswrapper[31411]: I0224 02:36:35.159711 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"]
Feb 24 02:36:35.165647 master-0 kubenswrapper[31411]: I0224 02:36:35.164190 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bw6l8" event={"ID":"a8678c29-b004-43c1-8393-c7a285064416","Type":"ContainerDied","Data":"844c316fcd5157eb6aad95b20bd5ef9fa8a5a98cc93a7727cd0fc224e5b93785"}
Feb 24 02:36:35.165647 master-0 kubenswrapper[31411]: I0224 02:36:35.164276 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="844c316fcd5157eb6aad95b20bd5ef9fa8a5a98cc93a7727cd0fc224e5b93785"
Feb 24 02:36:35.165647 master-0 kubenswrapper[31411]: I0224 02:36:35.164286 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bw6l8"
Feb 24 02:36:35.171032 master-0 kubenswrapper[31411]: I0224 02:36:35.170980 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"646d5895-0594-419d-bc57-3beb2730117e","Type":"ContainerStarted","Data":"779faa0a04810831e960b3239c3fff4e132938adfa66533570a15a2aeeed4b79"}
Feb 24 02:36:35.177467 master-0 kubenswrapper[31411]: I0224 02:36:35.177410 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d6814476-68e8-4541-91b5-e5a159982ff5","Type":"ContainerStarted","Data":"e384e2d0410bedeb4234d1124efb7db793999cb383a58aa2d2afc9cc3a62b1ef"}
Feb 24 02:36:35.177467 master-0 kubenswrapper[31411]: I0224 02:36:35.177468 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d6814476-68e8-4541-91b5-e5a159982ff5","Type":"ContainerStarted","Data":"ee3a47bc61fe2b5bc13160578709c20ad39bac3d0df18cb4d84e91cc9b4a6a01"}
Feb 24 02:36:35.216061 master-0 kubenswrapper[31411]: I0224 02:36:35.215958 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8705a-default-external-api-0" podStartSLOduration=7.215933598 podStartE2EDuration="7.215933598s" podCreationTimestamp="2026-02-24 02:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:35.200939207 +0000 UTC m=+938.418137053" watchObservedRunningTime="2026-02-24 02:36:35.215933598 +0000 UTC m=+938.433131444"
Feb 24 02:36:35.237598 master-0 kubenswrapper[31411]: I0224 02:36:35.234995 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8705a-default-internal-api-0" podStartSLOduration=7.234980001 podStartE2EDuration="7.234980001s" podCreationTimestamp="2026-02-24 02:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:35.229557059 +0000 UTC m=+938.446754905" watchObservedRunningTime="2026-02-24 02:36:35.234980001 +0000 UTC m=+938.452177847"
Feb 24 02:36:35.328147 master-0 kubenswrapper[31411]: I0224 02:36:35.328071 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bw6l8"]
Feb 24 02:36:35.339595 master-0 kubenswrapper[31411]: I0224 02:36:35.338891 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bw6l8"]
Feb 24 02:36:35.483713 master-0 kubenswrapper[31411]: I0224 02:36:35.483449 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wdcb6"]
Feb 24 02:36:35.484598 master-0 kubenswrapper[31411]: E0224 02:36:35.484270 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f83a0b0-4fc6-4600-b532-d40414aa61a0" containerName="init"
Feb 24 02:36:35.484598 master-0 kubenswrapper[31411]: I0224 02:36:35.484294 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f83a0b0-4fc6-4600-b532-d40414aa61a0" containerName="init"
Feb 24 02:36:35.484598 master-0 kubenswrapper[31411]: E0224 02:36:35.484326 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8678c29-b004-43c1-8393-c7a285064416" containerName="keystone-bootstrap"
Feb 24 02:36:35.484598 master-0 kubenswrapper[31411]: I0224 02:36:35.484333 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8678c29-b004-43c1-8393-c7a285064416" containerName="keystone-bootstrap"
Feb 24 02:36:35.484598 master-0 kubenswrapper[31411]: E0224 02:36:35.484363 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4631c5-4285-4b86-8afb-2462577e53fc" containerName="mariadb-account-create-update"
Feb 24 02:36:35.484598 master-0 kubenswrapper[31411]: I0224 02:36:35.484374 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4631c5-4285-4b86-8afb-2462577e53fc" containerName="mariadb-account-create-update"
Feb 24 02:36:35.484598 master-0 kubenswrapper[31411]: E0224 02:36:35.484434 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2752407f-852c-433f-80e1-0a3d258c7edf" containerName="mariadb-database-create"
Feb 24 02:36:35.484598 master-0 kubenswrapper[31411]: I0224 02:36:35.484442 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="2752407f-852c-433f-80e1-0a3d258c7edf" containerName="mariadb-database-create"
Feb 24 02:36:35.484861 master-0 kubenswrapper[31411]: I0224 02:36:35.484767 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4631c5-4285-4b86-8afb-2462577e53fc" containerName="mariadb-account-create-update"
Feb 24 02:36:35.484861 master-0 kubenswrapper[31411]: I0224 02:36:35.484788 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="2752407f-852c-433f-80e1-0a3d258c7edf" containerName="mariadb-database-create"
Feb 24 02:36:35.484861 master-0 kubenswrapper[31411]: I0224 02:36:35.484819 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f83a0b0-4fc6-4600-b532-d40414aa61a0" containerName="init"
Feb 24 02:36:35.484861 master-0 kubenswrapper[31411]: I0224 02:36:35.484847 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8678c29-b004-43c1-8393-c7a285064416" containerName="keystone-bootstrap"
Feb 24 02:36:35.497600 master-0 kubenswrapper[31411]: I0224 02:36:35.486151 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.497600 master-0 kubenswrapper[31411]: I0224 02:36:35.488361 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 24 02:36:35.497600 master-0 kubenswrapper[31411]: I0224 02:36:35.488375 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 24 02:36:35.497600 master-0 kubenswrapper[31411]: I0224 02:36:35.488869 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 24 02:36:35.497600 master-0 kubenswrapper[31411]: I0224 02:36:35.489006 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 24 02:36:35.497600 master-0 kubenswrapper[31411]: I0224 02:36:35.493358 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wdcb6"]
Feb 24 02:36:35.574451 master-0 kubenswrapper[31411]: I0224 02:36:35.574373 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcsc\" (UniqueName: \"kubernetes.io/projected/4e6b0cc7-1005-47be-bc50-5b6057e2407d-kube-api-access-qxcsc\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.574743 master-0 kubenswrapper[31411]: I0224 02:36:35.574468 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-credential-keys\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.574743 master-0 kubenswrapper[31411]: I0224 02:36:35.574597 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-fernet-keys\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.574743 master-0 kubenswrapper[31411]: I0224 02:36:35.574631 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-scripts\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.574743 master-0 kubenswrapper[31411]: I0224 02:36:35.574700 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-config-data\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.574889 master-0 kubenswrapper[31411]: I0224 02:36:35.574782 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-combined-ca-bundle\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.679438 master-0 kubenswrapper[31411]: I0224 02:36:35.679340 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-fernet-keys\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.679850 master-0 kubenswrapper[31411]: I0224 02:36:35.679626 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-scripts\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.679850 master-0 kubenswrapper[31411]: I0224 02:36:35.679716 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-config-data\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.679991 master-0 kubenswrapper[31411]: I0224 02:36:35.679890 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-combined-ca-bundle\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.680169 master-0 kubenswrapper[31411]: I0224 02:36:35.680126 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxcsc\" (UniqueName: \"kubernetes.io/projected/4e6b0cc7-1005-47be-bc50-5b6057e2407d-kube-api-access-qxcsc\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.680264 master-0 kubenswrapper[31411]: I0224 02:36:35.680226 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-credential-keys\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.685959 master-0 kubenswrapper[31411]: I0224 02:36:35.685894 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-credential-keys\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.686147 master-0 kubenswrapper[31411]: I0224 02:36:35.686070 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-config-data\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.687005 master-0 kubenswrapper[31411]: I0224 02:36:35.686951 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-scripts\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.687623 master-0 kubenswrapper[31411]: I0224 02:36:35.687529 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-fernet-keys\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.689118 master-0 kubenswrapper[31411]: I0224 02:36:35.689043 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-combined-ca-bundle\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.711081 master-0 kubenswrapper[31411]: I0224 02:36:35.710128 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxcsc\" (UniqueName: \"kubernetes.io/projected/4e6b0cc7-1005-47be-bc50-5b6057e2407d-kube-api-access-qxcsc\") pod \"keystone-bootstrap-wdcb6\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.730614 master-0 kubenswrapper[31411]: I0224 02:36:35.730120 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-jzr8b"]
Feb 24 02:36:35.737657 master-0 kubenswrapper[31411]: I0224 02:36:35.734844 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.739606 master-0 kubenswrapper[31411]: I0224 02:36:35.738129 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts"
Feb 24 02:36:35.739606 master-0 kubenswrapper[31411]: I0224 02:36:35.738825 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Feb 24 02:36:35.751074 master-0 kubenswrapper[31411]: I0224 02:36:35.747192 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-jzr8b"]
Feb 24 02:36:35.786979 master-0 kubenswrapper[31411]: I0224 02:36:35.786891 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-scripts\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.786979 master-0 kubenswrapper[31411]: I0224 02:36:35.786967 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q8vp\" (UniqueName: \"kubernetes.io/projected/a3a705bf-9636-4410-a44a-6ff6907d4179-kube-api-access-6q8vp\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.787297 master-0 kubenswrapper[31411]: I0224 02:36:35.787030 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data-merged\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.787297 master-0 kubenswrapper[31411]: I0224 02:36:35.787070 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/a3a705bf-9636-4410-a44a-6ff6907d4179-etc-podinfo\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.787297 master-0 kubenswrapper[31411]: I0224 02:36:35.787105 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.787618 master-0 kubenswrapper[31411]: I0224 02:36:35.787560 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-combined-ca-bundle\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.814437 master-0 kubenswrapper[31411]: I0224 02:36:35.813233 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wdcb6"
Feb 24 02:36:35.889859 master-0 kubenswrapper[31411]: I0224 02:36:35.889783 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-combined-ca-bundle\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.890678 master-0 kubenswrapper[31411]: I0224 02:36:35.890621 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-scripts\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.890763 master-0 kubenswrapper[31411]: I0224 02:36:35.890709 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q8vp\" (UniqueName: \"kubernetes.io/projected/a3a705bf-9636-4410-a44a-6ff6907d4179-kube-api-access-6q8vp\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.890858 master-0 kubenswrapper[31411]: I0224 02:36:35.890820 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data-merged\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.890929 master-0 kubenswrapper[31411]: I0224 02:36:35.890906 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/a3a705bf-9636-4410-a44a-6ff6907d4179-etc-podinfo\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.891013 master-0 kubenswrapper[31411]: I0224 02:36:35.890982 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.894179 master-0 kubenswrapper[31411]: I0224 02:36:35.894137 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data-merged\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.895379 master-0 kubenswrapper[31411]: I0224 02:36:35.895317 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-combined-ca-bundle\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.895670 master-0 kubenswrapper[31411]: I0224 02:36:35.895621 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.898931 master-0 kubenswrapper[31411]: I0224 02:36:35.898862 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/a3a705bf-9636-4410-a44a-6ff6907d4179-etc-podinfo\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:35.900181 master-0 kubenswrapper[31411]: I0224 02:36:35.900130 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-scripts\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:36.406029 master-0 kubenswrapper[31411]: I0224 02:36:36.404152 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-8705a-default-internal-api-0" podUID="d6814476-68e8-4541-91b5-e5a159982ff5" containerName="glance-log" containerID="cri-o://ee3a47bc61fe2b5bc13160578709c20ad39bac3d0df18cb4d84e91cc9b4a6a01" gracePeriod=30
Feb 24 02:36:36.408562 master-0 kubenswrapper[31411]: I0224 02:36:36.407017 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q8vp\" (UniqueName: \"kubernetes.io/projected/a3a705bf-9636-4410-a44a-6ff6907d4179-kube-api-access-6q8vp\") pod \"ironic-db-sync-jzr8b\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") " pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:36:36.408562 master-0 kubenswrapper[31411]: I0224 02:36:36.407552 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-8705a-default-external-api-0" podUID="646d5895-0594-419d-bc57-3beb2730117e" containerName="glance-log" containerID="cri-o://dd82f27f0c57a76f84c5efda33e8d6c01072fe917998ad05ab2156d3341121d6" gracePeriod=30
Feb 24 02:36:36.408562 master-0 kubenswrapper[31411]: I0224 02:36:36.407618 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-8705a-default-internal-api-0" podUID="d6814476-68e8-4541-91b5-e5a159982ff5" containerName="glance-httpd" containerID="cri-o://e384e2d0410bedeb4234d1124efb7db793999cb383a58aa2d2afc9cc3a62b1ef" gracePeriod=30
Feb 24 02:36:36.408562 master-0 kubenswrapper[31411]: I0224 02:36:36.408153 31411 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openstack/glance-8705a-default-external-api-0" podUID="646d5895-0594-419d-bc57-3beb2730117e" containerName="glance-httpd" containerID="cri-o://779faa0a04810831e960b3239c3fff4e132938adfa66533570a15a2aeeed4b79" gracePeriod=30 Feb 24 02:36:36.420699 master-0 kubenswrapper[31411]: I0224 02:36:36.419825 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jzr8b" Feb 24 02:36:36.966032 master-0 kubenswrapper[31411]: I0224 02:36:36.959324 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wdcb6"] Feb 24 02:36:37.121620 master-0 kubenswrapper[31411]: I0224 02:36:37.121484 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8678c29-b004-43c1-8393-c7a285064416" path="/var/lib/kubelet/pods/a8678c29-b004-43c1-8393-c7a285064416/volumes" Feb 24 02:36:37.266029 master-0 kubenswrapper[31411]: I0224 02:36:37.265975 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-jzr8b"] Feb 24 02:36:37.299256 master-0 kubenswrapper[31411]: W0224 02:36:37.294319 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3a705bf_9636_4410_a44a_6ff6907d4179.slice/crio-4132648f2993f7a672d22ddb371ede07a50dc3610cf1142ca93b21a22ec87fb2 WatchSource:0}: Error finding container 4132648f2993f7a672d22ddb371ede07a50dc3610cf1142ca93b21a22ec87fb2: Status 404 returned error can't find the container with id 4132648f2993f7a672d22ddb371ede07a50dc3610cf1142ca93b21a22ec87fb2 Feb 24 02:36:37.481246 master-0 kubenswrapper[31411]: I0224 02:36:37.481169 31411 generic.go:334] "Generic (PLEG): container finished" podID="d6814476-68e8-4541-91b5-e5a159982ff5" containerID="e384e2d0410bedeb4234d1124efb7db793999cb383a58aa2d2afc9cc3a62b1ef" exitCode=0 Feb 24 02:36:37.481246 master-0 kubenswrapper[31411]: I0224 02:36:37.481217 31411 generic.go:334] "Generic (PLEG): container finished" 
podID="d6814476-68e8-4541-91b5-e5a159982ff5" containerID="ee3a47bc61fe2b5bc13160578709c20ad39bac3d0df18cb4d84e91cc9b4a6a01" exitCode=143 Feb 24 02:36:37.481985 master-0 kubenswrapper[31411]: I0224 02:36:37.481311 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d6814476-68e8-4541-91b5-e5a159982ff5","Type":"ContainerDied","Data":"e384e2d0410bedeb4234d1124efb7db793999cb383a58aa2d2afc9cc3a62b1ef"} Feb 24 02:36:37.481985 master-0 kubenswrapper[31411]: I0224 02:36:37.481378 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d6814476-68e8-4541-91b5-e5a159982ff5","Type":"ContainerDied","Data":"ee3a47bc61fe2b5bc13160578709c20ad39bac3d0df18cb4d84e91cc9b4a6a01"} Feb 24 02:36:37.484820 master-0 kubenswrapper[31411]: I0224 02:36:37.484717 31411 generic.go:334] "Generic (PLEG): container finished" podID="646d5895-0594-419d-bc57-3beb2730117e" containerID="779faa0a04810831e960b3239c3fff4e132938adfa66533570a15a2aeeed4b79" exitCode=0 Feb 24 02:36:37.484820 master-0 kubenswrapper[31411]: I0224 02:36:37.484780 31411 generic.go:334] "Generic (PLEG): container finished" podID="646d5895-0594-419d-bc57-3beb2730117e" containerID="dd82f27f0c57a76f84c5efda33e8d6c01072fe917998ad05ab2156d3341121d6" exitCode=143 Feb 24 02:36:37.488261 master-0 kubenswrapper[31411]: I0224 02:36:37.484814 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"646d5895-0594-419d-bc57-3beb2730117e","Type":"ContainerDied","Data":"779faa0a04810831e960b3239c3fff4e132938adfa66533570a15a2aeeed4b79"} Feb 24 02:36:37.488261 master-0 kubenswrapper[31411]: I0224 02:36:37.484874 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" 
event={"ID":"646d5895-0594-419d-bc57-3beb2730117e","Type":"ContainerDied","Data":"dd82f27f0c57a76f84c5efda33e8d6c01072fe917998ad05ab2156d3341121d6"} Feb 24 02:36:37.492213 master-0 kubenswrapper[31411]: I0224 02:36:37.492135 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdcb6" event={"ID":"4e6b0cc7-1005-47be-bc50-5b6057e2407d","Type":"ContainerStarted","Data":"836773369c211dfb595440864cdcc202fb44fd758d7990ee88075dba5852c88d"} Feb 24 02:36:37.492213 master-0 kubenswrapper[31411]: I0224 02:36:37.492187 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdcb6" event={"ID":"4e6b0cc7-1005-47be-bc50-5b6057e2407d","Type":"ContainerStarted","Data":"b0334569865c691ea1bbd65d37ef31df922a690a52b652f140e63e33836ecbb3"} Feb 24 02:36:37.498841 master-0 kubenswrapper[31411]: I0224 02:36:37.498798 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jzr8b" event={"ID":"a3a705bf-9636-4410-a44a-6ff6907d4179","Type":"ContainerStarted","Data":"4132648f2993f7a672d22ddb371ede07a50dc3610cf1142ca93b21a22ec87fb2"} Feb 24 02:36:37.504002 master-0 kubenswrapper[31411]: I0224 02:36:37.503952 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:37.528074 master-0 kubenswrapper[31411]: I0224 02:36:37.527985 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wdcb6" podStartSLOduration=2.527961049 podStartE2EDuration="2.527961049s" podCreationTimestamp="2026-02-24 02:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:37.514141392 +0000 UTC m=+940.731339238" watchObservedRunningTime="2026-02-24 02:36:37.527961049 +0000 UTC m=+940.745158905" Feb 24 02:36:37.708004 master-0 kubenswrapper[31411]: I0224 02:36:37.707650 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"646d5895-0594-419d-bc57-3beb2730117e\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " Feb 24 02:36:37.708004 master-0 kubenswrapper[31411]: I0224 02:36:37.707870 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-config-data\") pod \"646d5895-0594-419d-bc57-3beb2730117e\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " Feb 24 02:36:37.708004 master-0 kubenswrapper[31411]: I0224 02:36:37.707908 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-scripts\") pod \"646d5895-0594-419d-bc57-3beb2730117e\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " Feb 24 02:36:37.708004 master-0 kubenswrapper[31411]: I0224 02:36:37.707956 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-httpd-run\") pod 
\"646d5895-0594-419d-bc57-3beb2730117e\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " Feb 24 02:36:37.708418 master-0 kubenswrapper[31411]: I0224 02:36:37.708026 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-logs\") pod \"646d5895-0594-419d-bc57-3beb2730117e\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " Feb 24 02:36:37.708418 master-0 kubenswrapper[31411]: I0224 02:36:37.708072 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-combined-ca-bundle\") pod \"646d5895-0594-419d-bc57-3beb2730117e\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " Feb 24 02:36:37.708418 master-0 kubenswrapper[31411]: I0224 02:36:37.708143 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5w68\" (UniqueName: \"kubernetes.io/projected/646d5895-0594-419d-bc57-3beb2730117e-kube-api-access-k5w68\") pod \"646d5895-0594-419d-bc57-3beb2730117e\" (UID: \"646d5895-0594-419d-bc57-3beb2730117e\") " Feb 24 02:36:37.720875 master-0 kubenswrapper[31411]: I0224 02:36:37.709034 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "646d5895-0594-419d-bc57-3beb2730117e" (UID: "646d5895-0594-419d-bc57-3beb2730117e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:36:37.720875 master-0 kubenswrapper[31411]: I0224 02:36:37.710938 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-logs" (OuterVolumeSpecName: "logs") pod "646d5895-0594-419d-bc57-3beb2730117e" (UID: "646d5895-0594-419d-bc57-3beb2730117e"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:36:37.720875 master-0 kubenswrapper[31411]: I0224 02:36:37.715107 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-scripts" (OuterVolumeSpecName: "scripts") pod "646d5895-0594-419d-bc57-3beb2730117e" (UID: "646d5895-0594-419d-bc57-3beb2730117e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:37.720875 master-0 kubenswrapper[31411]: I0224 02:36:37.716095 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646d5895-0594-419d-bc57-3beb2730117e-kube-api-access-k5w68" (OuterVolumeSpecName: "kube-api-access-k5w68") pod "646d5895-0594-419d-bc57-3beb2730117e" (UID: "646d5895-0594-419d-bc57-3beb2730117e"). InnerVolumeSpecName "kube-api-access-k5w68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:37.735664 master-0 kubenswrapper[31411]: I0224 02:36:37.734024 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66" (OuterVolumeSpecName: "glance") pod "646d5895-0594-419d-bc57-3beb2730117e" (UID: "646d5895-0594-419d-bc57-3beb2730117e"). InnerVolumeSpecName "pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 02:36:37.750967 master-0 kubenswrapper[31411]: I0224 02:36:37.750922 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "646d5895-0594-419d-bc57-3beb2730117e" (UID: "646d5895-0594-419d-bc57-3beb2730117e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:37.772696 master-0 kubenswrapper[31411]: I0224 02:36:37.772631 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-config-data" (OuterVolumeSpecName: "config-data") pod "646d5895-0594-419d-bc57-3beb2730117e" (UID: "646d5895-0594-419d-bc57-3beb2730117e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:37.786383 master-0 kubenswrapper[31411]: I0224 02:36:37.786109 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:36:37.815274 master-0 kubenswrapper[31411]: I0224 02:36:37.815187 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:37.815274 master-0 kubenswrapper[31411]: I0224 02:36:37.815233 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5w68\" (UniqueName: \"kubernetes.io/projected/646d5895-0594-419d-bc57-3beb2730117e-kube-api-access-k5w68\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:37.815274 master-0 kubenswrapper[31411]: I0224 02:36:37.815295 31411 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") on node \"master-0\" " Feb 24 02:36:37.815766 master-0 kubenswrapper[31411]: I0224 02:36:37.815311 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:37.815766 master-0 kubenswrapper[31411]: I0224 02:36:37.815323 31411 reconciler_common.go:293] "Volume detached 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646d5895-0594-419d-bc57-3beb2730117e-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:37.815766 master-0 kubenswrapper[31411]: I0224 02:36:37.815333 31411 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:37.815766 master-0 kubenswrapper[31411]: I0224 02:36:37.815345 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d5895-0594-419d-bc57-3beb2730117e-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:37.848810 master-0 kubenswrapper[31411]: I0224 02:36:37.848266 31411 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 24 02:36:37.848810 master-0 kubenswrapper[31411]: I0224 02:36:37.848493 31411 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6" (UniqueName: "kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66") on node "master-0" Feb 24 02:36:37.869297 master-0 kubenswrapper[31411]: I0224 02:36:37.869207 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:37.896951 master-0 kubenswrapper[31411]: I0224 02:36:37.891821 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84556f859-6lpst"] Feb 24 02:36:37.896951 master-0 kubenswrapper[31411]: I0224 02:36:37.892162 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84556f859-6lpst" podUID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerName="dnsmasq-dns" containerID="cri-o://bc0d39e514b6aaeeaeac1c835533735f6eb360a0428ecb221dc08d014e43c8fb" gracePeriod=10 Feb 24 02:36:37.918565 master-0 kubenswrapper[31411]: I0224 02:36:37.918503 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-scripts\") pod \"d6814476-68e8-4541-91b5-e5a159982ff5\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " Feb 24 02:36:37.918844 master-0 kubenswrapper[31411]: I0224 02:36:37.918812 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"d6814476-68e8-4541-91b5-e5a159982ff5\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " Feb 24 02:36:37.918893 master-0 kubenswrapper[31411]: I0224 02:36:37.918873 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-config-data\") pod \"d6814476-68e8-4541-91b5-e5a159982ff5\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " Feb 24 02:36:37.918951 master-0 kubenswrapper[31411]: I0224 02:36:37.918930 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-httpd-run\") pod \"d6814476-68e8-4541-91b5-e5a159982ff5\" (UID: 
\"d6814476-68e8-4541-91b5-e5a159982ff5\") " Feb 24 02:36:37.930069 master-0 kubenswrapper[31411]: I0224 02:36:37.929995 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-scripts" (OuterVolumeSpecName: "scripts") pod "d6814476-68e8-4541-91b5-e5a159982ff5" (UID: "d6814476-68e8-4541-91b5-e5a159982ff5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:37.931219 master-0 kubenswrapper[31411]: I0224 02:36:37.931183 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d6814476-68e8-4541-91b5-e5a159982ff5" (UID: "d6814476-68e8-4541-91b5-e5a159982ff5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:36:37.966726 master-0 kubenswrapper[31411]: I0224 02:36:37.965955 31411 reconciler_common.go:293] "Volume detached for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:37.966726 master-0 kubenswrapper[31411]: I0224 02:36:37.966028 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:37.966726 master-0 kubenswrapper[31411]: I0224 02:36:37.966044 31411 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:37.977912 master-0 kubenswrapper[31411]: I0224 02:36:37.977860 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0" 
(OuterVolumeSpecName: "glance") pod "d6814476-68e8-4541-91b5-e5a159982ff5" (UID: "d6814476-68e8-4541-91b5-e5a159982ff5"). InnerVolumeSpecName "pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 02:36:38.026876 master-0 kubenswrapper[31411]: I0224 02:36:38.026807 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-config-data" (OuterVolumeSpecName: "config-data") pod "d6814476-68e8-4541-91b5-e5a159982ff5" (UID: "d6814476-68e8-4541-91b5-e5a159982ff5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:38.068100 master-0 kubenswrapper[31411]: I0224 02:36:38.068038 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsrfd\" (UniqueName: \"kubernetes.io/projected/d6814476-68e8-4541-91b5-e5a159982ff5-kube-api-access-jsrfd\") pod \"d6814476-68e8-4541-91b5-e5a159982ff5\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " Feb 24 02:36:38.068337 master-0 kubenswrapper[31411]: I0224 02:36:38.068134 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-logs\") pod \"d6814476-68e8-4541-91b5-e5a159982ff5\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " Feb 24 02:36:38.068705 master-0 kubenswrapper[31411]: I0224 02:36:38.068677 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-combined-ca-bundle\") pod \"d6814476-68e8-4541-91b5-e5a159982ff5\" (UID: \"d6814476-68e8-4541-91b5-e5a159982ff5\") " Feb 24 02:36:38.069036 master-0 kubenswrapper[31411]: I0224 02:36:38.068987 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-logs" (OuterVolumeSpecName: "logs") pod "d6814476-68e8-4541-91b5-e5a159982ff5" (UID: "d6814476-68e8-4541-91b5-e5a159982ff5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:36:38.069878 master-0 kubenswrapper[31411]: I0224 02:36:38.069840 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6814476-68e8-4541-91b5-e5a159982ff5-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:38.069934 master-0 kubenswrapper[31411]: I0224 02:36:38.069900 31411 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") on node \"master-0\" " Feb 24 02:36:38.069934 master-0 kubenswrapper[31411]: I0224 02:36:38.069919 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:38.072246 master-0 kubenswrapper[31411]: I0224 02:36:38.072181 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6814476-68e8-4541-91b5-e5a159982ff5-kube-api-access-jsrfd" (OuterVolumeSpecName: "kube-api-access-jsrfd") pod "d6814476-68e8-4541-91b5-e5a159982ff5" (UID: "d6814476-68e8-4541-91b5-e5a159982ff5"). InnerVolumeSpecName "kube-api-access-jsrfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:38.107168 master-0 kubenswrapper[31411]: I0224 02:36:38.107127 31411 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 24 02:36:38.107340 master-0 kubenswrapper[31411]: I0224 02:36:38.107314 31411 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8" (UniqueName: "kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0") on node "master-0" Feb 24 02:36:38.150358 master-0 kubenswrapper[31411]: I0224 02:36:38.145324 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6814476-68e8-4541-91b5-e5a159982ff5" (UID: "d6814476-68e8-4541-91b5-e5a159982ff5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:38.172954 master-0 kubenswrapper[31411]: I0224 02:36:38.172896 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6814476-68e8-4541-91b5-e5a159982ff5-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:38.172954 master-0 kubenswrapper[31411]: I0224 02:36:38.172946 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsrfd\" (UniqueName: \"kubernetes.io/projected/d6814476-68e8-4541-91b5-e5a159982ff5-kube-api-access-jsrfd\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:38.172954 master-0 kubenswrapper[31411]: I0224 02:36:38.172962 31411 reconciler_common.go:293] "Volume detached for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:38.519787 master-0 kubenswrapper[31411]: I0224 02:36:38.519675 31411 generic.go:334] "Generic (PLEG): container finished" podID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerID="bc0d39e514b6aaeeaeac1c835533735f6eb360a0428ecb221dc08d014e43c8fb" exitCode=0 Feb 24 02:36:38.519787 master-0 kubenswrapper[31411]: I0224 02:36:38.519778 31411 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84556f859-6lpst" event={"ID":"49237a00-8280-4287-a6fd-9fdf6a486c95","Type":"ContainerDied","Data":"bc0d39e514b6aaeeaeac1c835533735f6eb360a0428ecb221dc08d014e43c8fb"} Feb 24 02:36:38.524410 master-0 kubenswrapper[31411]: I0224 02:36:38.524378 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"646d5895-0594-419d-bc57-3beb2730117e","Type":"ContainerDied","Data":"11aaafe12f3c30b5e3c71a37c9f04a1b82d2744958da0fc37a85b7c6e89e99f5"} Feb 24 02:36:38.524487 master-0 kubenswrapper[31411]: I0224 02:36:38.524429 31411 scope.go:117] "RemoveContainer" containerID="779faa0a04810831e960b3239c3fff4e132938adfa66533570a15a2aeeed4b79" Feb 24 02:36:38.524634 master-0 kubenswrapper[31411]: I0224 02:36:38.524604 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.530322 master-0 kubenswrapper[31411]: I0224 02:36:38.530262 31411 generic.go:334] "Generic (PLEG): container finished" podID="7edb7291-5510-45f1-810f-de8e6bf08cd0" containerID="a396992afc6bd6a8dd79fe828cd981509e7cf8dd75f880184e243de5f99c769a" exitCode=0 Feb 24 02:36:38.530447 master-0 kubenswrapper[31411]: I0224 02:36:38.530342 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-njvpx" event={"ID":"7edb7291-5510-45f1-810f-de8e6bf08cd0","Type":"ContainerDied","Data":"a396992afc6bd6a8dd79fe828cd981509e7cf8dd75f880184e243de5f99c769a"} Feb 24 02:36:38.533932 master-0 kubenswrapper[31411]: I0224 02:36:38.533810 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:38.533932 master-0 kubenswrapper[31411]: I0224 02:36:38.533850 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d6814476-68e8-4541-91b5-e5a159982ff5","Type":"ContainerDied","Data":"5cce40d8d29e9be9445938ac036a92b7bca01c79ad779ad52b6f20cf11b6fe70"} Feb 24 02:36:38.701549 master-0 kubenswrapper[31411]: I0224 02:36:38.676563 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8705a-default-external-api-0"] Feb 24 02:36:38.751055 master-0 kubenswrapper[31411]: I0224 02:36:38.750991 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8705a-default-external-api-0"] Feb 24 02:36:38.772541 master-0 kubenswrapper[31411]: I0224 02:36:38.772401 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"] Feb 24 02:36:38.811340 master-0 kubenswrapper[31411]: I0224 02:36:38.811272 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"] Feb 24 02:36:38.836755 master-0 kubenswrapper[31411]: I0224 02:36:38.836680 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8705a-default-external-api-0"] Feb 24 02:36:38.837379 master-0 kubenswrapper[31411]: E0224 02:36:38.837349 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6814476-68e8-4541-91b5-e5a159982ff5" containerName="glance-log" Feb 24 02:36:38.837379 master-0 kubenswrapper[31411]: I0224 02:36:38.837371 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6814476-68e8-4541-91b5-e5a159982ff5" containerName="glance-log" Feb 24 02:36:38.837452 master-0 kubenswrapper[31411]: E0224 02:36:38.837401 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646d5895-0594-419d-bc57-3beb2730117e" containerName="glance-log" Feb 24 02:36:38.837452 master-0 
kubenswrapper[31411]: I0224 02:36:38.837409 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="646d5895-0594-419d-bc57-3beb2730117e" containerName="glance-log" Feb 24 02:36:38.837452 master-0 kubenswrapper[31411]: E0224 02:36:38.837453 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6814476-68e8-4541-91b5-e5a159982ff5" containerName="glance-httpd" Feb 24 02:36:38.837545 master-0 kubenswrapper[31411]: I0224 02:36:38.837462 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6814476-68e8-4541-91b5-e5a159982ff5" containerName="glance-httpd" Feb 24 02:36:38.837545 master-0 kubenswrapper[31411]: E0224 02:36:38.837492 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646d5895-0594-419d-bc57-3beb2730117e" containerName="glance-httpd" Feb 24 02:36:38.837545 master-0 kubenswrapper[31411]: I0224 02:36:38.837501 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="646d5895-0594-419d-bc57-3beb2730117e" containerName="glance-httpd" Feb 24 02:36:38.837775 master-0 kubenswrapper[31411]: I0224 02:36:38.837748 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="646d5895-0594-419d-bc57-3beb2730117e" containerName="glance-httpd" Feb 24 02:36:38.837819 master-0 kubenswrapper[31411]: I0224 02:36:38.837786 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6814476-68e8-4541-91b5-e5a159982ff5" containerName="glance-log" Feb 24 02:36:38.837850 master-0 kubenswrapper[31411]: I0224 02:36:38.837821 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6814476-68e8-4541-91b5-e5a159982ff5" containerName="glance-httpd" Feb 24 02:36:38.837850 master-0 kubenswrapper[31411]: I0224 02:36:38.837837 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="646d5895-0594-419d-bc57-3beb2730117e" containerName="glance-log" Feb 24 02:36:38.839135 master-0 kubenswrapper[31411]: I0224 02:36:38.839103 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.842255 master-0 kubenswrapper[31411]: I0224 02:36:38.842207 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 24 02:36:38.842662 master-0 kubenswrapper[31411]: I0224 02:36:38.842617 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 24 02:36:38.842936 master-0 kubenswrapper[31411]: I0224 02:36:38.842914 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-8705a-default-external-config-data" Feb 24 02:36:38.879165 master-0 kubenswrapper[31411]: I0224 02:36:38.878741 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-external-api-0"] Feb 24 02:36:38.910714 master-0 kubenswrapper[31411]: I0224 02:36:38.910645 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8705a-default-internal-api-0"] Feb 24 02:36:38.915452 master-0 kubenswrapper[31411]: I0224 02:36:38.915357 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:38.920612 master-0 kubenswrapper[31411]: I0224 02:36:38.920559 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 24 02:36:38.920712 master-0 kubenswrapper[31411]: I0224 02:36:38.920670 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-8705a-default-internal-config-data" Feb 24 02:36:38.927262 master-0 kubenswrapper[31411]: I0224 02:36:38.927226 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.927372 master-0 kubenswrapper[31411]: I0224 02:36:38.927341 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-config-data\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.927531 master-0 kubenswrapper[31411]: I0224 02:36:38.927445 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-scripts\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.927531 master-0 kubenswrapper[31411]: I0224 02:36:38.927484 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-public-tls-certs\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.928138 master-0 kubenswrapper[31411]: I0224 02:36:38.927543 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgph\" (UniqueName: \"kubernetes.io/projected/3e330fef-38b6-4b5d-b001-886ecfdd4028-kube-api-access-kmgph\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.928138 master-0 kubenswrapper[31411]: I0224 02:36:38.927603 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-combined-ca-bundle\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.928138 master-0 kubenswrapper[31411]: I0224 02:36:38.927700 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-logs\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.928138 master-0 kubenswrapper[31411]: I0224 02:36:38.927931 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-httpd-run\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:38.953967 master-0 
kubenswrapper[31411]: I0224 02:36:38.953820 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"] Feb 24 02:36:39.032359 master-0 kubenswrapper[31411]: I0224 02:36:39.032269 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-combined-ca-bundle\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.032359 master-0 kubenswrapper[31411]: I0224 02:36:39.032357 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-logs\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032390 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032437 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-httpd-run\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032473 31411 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-httpd-run\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032500 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-internal-tls-certs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032558 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-config-data\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032642 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-logs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032687 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-scripts\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.032771 master-0 
kubenswrapper[31411]: I0224 02:36:39.032708 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032738 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-combined-ca-bundle\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032757 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz5wk\" (UniqueName: \"kubernetes.io/projected/d7d5c157-238f-4062-8e57-dccee5fa4f9e-kube-api-access-tz5wk\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.032771 master-0 kubenswrapper[31411]: I0224 02:36:39.032781 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-config-data\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.033114 master-0 kubenswrapper[31411]: I0224 02:36:39.032843 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-scripts\") pod \"glance-8705a-default-external-api-0\" 
(UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.033114 master-0 kubenswrapper[31411]: I0224 02:36:39.032873 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-public-tls-certs\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.037482 master-0 kubenswrapper[31411]: I0224 02:36:39.037433 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-scripts\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.037634 master-0 kubenswrapper[31411]: I0224 02:36:39.037609 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgph\" (UniqueName: \"kubernetes.io/projected/3e330fef-38b6-4b5d-b001-886ecfdd4028-kube-api-access-kmgph\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.037879 master-0 kubenswrapper[31411]: I0224 02:36:39.037829 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-logs\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.038047 master-0 kubenswrapper[31411]: I0224 02:36:39.038014 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-httpd-run\") pod 
\"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.038525 master-0 kubenswrapper[31411]: I0224 02:36:39.038494 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-config-data\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.044336 master-0 kubenswrapper[31411]: I0224 02:36:39.044309 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 02:36:39.044406 master-0 kubenswrapper[31411]: I0224 02:36:39.044356 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e389381360593e6090fc0484f10effeb9de4577cbce267bb52534b321405c56c/globalmount\"" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.044795 master-0 kubenswrapper[31411]: I0224 02:36:39.044769 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-combined-ca-bundle\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.054878 master-0 kubenswrapper[31411]: I0224 02:36:39.054836 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgph\" (UniqueName: 
\"kubernetes.io/projected/3e330fef-38b6-4b5d-b001-886ecfdd4028-kube-api-access-kmgph\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.055486 master-0 kubenswrapper[31411]: I0224 02:36:39.055436 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-public-tls-certs\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:39.133445 master-0 kubenswrapper[31411]: I0224 02:36:39.133190 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646d5895-0594-419d-bc57-3beb2730117e" path="/var/lib/kubelet/pods/646d5895-0594-419d-bc57-3beb2730117e/volumes" Feb 24 02:36:39.134367 master-0 kubenswrapper[31411]: I0224 02:36:39.134331 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6814476-68e8-4541-91b5-e5a159982ff5" path="/var/lib/kubelet/pods/d6814476-68e8-4541-91b5-e5a159982ff5/volumes" Feb 24 02:36:39.140842 master-0 kubenswrapper[31411]: I0224 02:36:39.140777 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-config-data\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.140842 master-0 kubenswrapper[31411]: I0224 02:36:39.140833 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-logs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.141288 
master-0 kubenswrapper[31411]: I0224 02:36:39.141247 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-scripts\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.141413 master-0 kubenswrapper[31411]: I0224 02:36:39.141387 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-combined-ca-bundle\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.141463 master-0 kubenswrapper[31411]: I0224 02:36:39.141405 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-logs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.141463 master-0 kubenswrapper[31411]: I0224 02:36:39.141430 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz5wk\" (UniqueName: \"kubernetes.io/projected/d7d5c157-238f-4062-8e57-dccee5fa4f9e-kube-api-access-tz5wk\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.142040 master-0 kubenswrapper[31411]: I0224 02:36:39.142008 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " 
pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.142127 master-0 kubenswrapper[31411]: I0224 02:36:39.142105 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-httpd-run\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.142228 master-0 kubenswrapper[31411]: I0224 02:36:39.142200 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-internal-tls-certs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.144516 master-0 kubenswrapper[31411]: I0224 02:36:39.144486 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-httpd-run\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.146366 master-0 kubenswrapper[31411]: I0224 02:36:39.146292 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 02:36:39.146366 master-0 kubenswrapper[31411]: I0224 02:36:39.146320 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/568a2d9cea509129bb10d2a8daaed82c1a66b4ccaea30faa55e2d5b91b5cf92d/globalmount\"" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.149078 master-0 kubenswrapper[31411]: I0224 02:36:39.149033 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-scripts\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.151223 master-0 kubenswrapper[31411]: I0224 02:36:39.151179 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-config-data\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.152804 master-0 kubenswrapper[31411]: I0224 02:36:39.152771 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-internal-tls-certs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.160493 master-0 kubenswrapper[31411]: I0224 02:36:39.160457 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz5wk\" (UniqueName: 
\"kubernetes.io/projected/d7d5c157-238f-4062-8e57-dccee5fa4f9e-kube-api-access-tz5wk\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:39.165799 master-0 kubenswrapper[31411]: I0224 02:36:39.165760 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-combined-ca-bundle\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:40.336707 master-0 kubenswrapper[31411]: I0224 02:36:40.336640 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:40.389422 master-0 kubenswrapper[31411]: I0224 02:36:40.389230 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:36:41.582983 master-0 kubenswrapper[31411]: I0224 02:36:41.582930 31411 generic.go:334] "Generic (PLEG): container finished" podID="4e6b0cc7-1005-47be-bc50-5b6057e2407d" containerID="836773369c211dfb595440864cdcc202fb44fd758d7990ee88075dba5852c88d" exitCode=0 Feb 24 02:36:41.583638 master-0 kubenswrapper[31411]: I0224 02:36:41.583173 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdcb6" event={"ID":"4e6b0cc7-1005-47be-bc50-5b6057e2407d","Type":"ContainerDied","Data":"836773369c211dfb595440864cdcc202fb44fd758d7990ee88075dba5852c88d"} Feb 24 02:36:41.745683 master-0 kubenswrapper[31411]: I0224 02:36:41.745618 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:41.947685 master-0 kubenswrapper[31411]: I0224 02:36:41.947603 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:36:46.755400 master-0 kubenswrapper[31411]: I0224 02:36:46.755313 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84556f859-6lpst" podUID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.194:5353: i/o timeout" Feb 24 02:36:48.221468 master-0 kubenswrapper[31411]: I0224 02:36:48.221406 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wdcb6" Feb 24 02:36:48.233391 master-0 kubenswrapper[31411]: I0224 02:36:48.233346 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:36:48.362900 master-0 kubenswrapper[31411]: I0224 02:36:48.362642 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcvh2\" (UniqueName: \"kubernetes.io/projected/49237a00-8280-4287-a6fd-9fdf6a486c95-kube-api-access-vcvh2\") pod \"49237a00-8280-4287-a6fd-9fdf6a486c95\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " Feb 24 02:36:48.362900 master-0 kubenswrapper[31411]: I0224 02:36:48.362751 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-config\") pod \"49237a00-8280-4287-a6fd-9fdf6a486c95\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " Feb 24 02:36:48.362900 master-0 kubenswrapper[31411]: I0224 02:36:48.362789 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-combined-ca-bundle\") pod \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " Feb 24 02:36:48.363344 master-0 kubenswrapper[31411]: I0224 02:36:48.363286 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-sb\") pod \"49237a00-8280-4287-a6fd-9fdf6a486c95\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " Feb 24 02:36:48.363512 master-0 kubenswrapper[31411]: I0224 02:36:48.363466 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-swift-storage-0\") pod \"49237a00-8280-4287-a6fd-9fdf6a486c95\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " Feb 24 02:36:48.363620 master-0 kubenswrapper[31411]: I0224 02:36:48.363556 
31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-nb\") pod \"49237a00-8280-4287-a6fd-9fdf6a486c95\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " Feb 24 02:36:48.363620 master-0 kubenswrapper[31411]: I0224 02:36:48.363600 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-svc\") pod \"49237a00-8280-4287-a6fd-9fdf6a486c95\" (UID: \"49237a00-8280-4287-a6fd-9fdf6a486c95\") " Feb 24 02:36:48.363777 master-0 kubenswrapper[31411]: I0224 02:36:48.363634 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-credential-keys\") pod \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " Feb 24 02:36:48.363777 master-0 kubenswrapper[31411]: I0224 02:36:48.363695 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-scripts\") pod \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " Feb 24 02:36:48.363777 master-0 kubenswrapper[31411]: I0224 02:36:48.363726 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxcsc\" (UniqueName: \"kubernetes.io/projected/4e6b0cc7-1005-47be-bc50-5b6057e2407d-kube-api-access-qxcsc\") pod \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " Feb 24 02:36:48.363978 master-0 kubenswrapper[31411]: I0224 02:36:48.363855 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-fernet-keys\") pod \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " Feb 24 02:36:48.363978 master-0 kubenswrapper[31411]: I0224 02:36:48.363902 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-config-data\") pod \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\" (UID: \"4e6b0cc7-1005-47be-bc50-5b6057e2407d\") " Feb 24 02:36:48.389721 master-0 kubenswrapper[31411]: I0224 02:36:48.374119 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49237a00-8280-4287-a6fd-9fdf6a486c95-kube-api-access-vcvh2" (OuterVolumeSpecName: "kube-api-access-vcvh2") pod "49237a00-8280-4287-a6fd-9fdf6a486c95" (UID: "49237a00-8280-4287-a6fd-9fdf6a486c95"). InnerVolumeSpecName "kube-api-access-vcvh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:48.390864 master-0 kubenswrapper[31411]: I0224 02:36:48.390691 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4e6b0cc7-1005-47be-bc50-5b6057e2407d" (UID: "4e6b0cc7-1005-47be-bc50-5b6057e2407d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:48.391032 master-0 kubenswrapper[31411]: I0224 02:36:48.390978 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4e6b0cc7-1005-47be-bc50-5b6057e2407d" (UID: "4e6b0cc7-1005-47be-bc50-5b6057e2407d"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:48.391119 master-0 kubenswrapper[31411]: I0224 02:36:48.391044 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6b0cc7-1005-47be-bc50-5b6057e2407d-kube-api-access-qxcsc" (OuterVolumeSpecName: "kube-api-access-qxcsc") pod "4e6b0cc7-1005-47be-bc50-5b6057e2407d" (UID: "4e6b0cc7-1005-47be-bc50-5b6057e2407d"). InnerVolumeSpecName "kube-api-access-qxcsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:48.415654 master-0 kubenswrapper[31411]: I0224 02:36:48.415328 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-scripts" (OuterVolumeSpecName: "scripts") pod "4e6b0cc7-1005-47be-bc50-5b6057e2407d" (UID: "4e6b0cc7-1005-47be-bc50-5b6057e2407d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:48.441997 master-0 kubenswrapper[31411]: I0224 02:36:48.441945 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-config" (OuterVolumeSpecName: "config") pod "49237a00-8280-4287-a6fd-9fdf6a486c95" (UID: "49237a00-8280-4287-a6fd-9fdf6a486c95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:48.448721 master-0 kubenswrapper[31411]: I0224 02:36:48.448647 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-config-data" (OuterVolumeSpecName: "config-data") pod "4e6b0cc7-1005-47be-bc50-5b6057e2407d" (UID: "4e6b0cc7-1005-47be-bc50-5b6057e2407d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:48.449925 master-0 kubenswrapper[31411]: I0224 02:36:48.449867 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49237a00-8280-4287-a6fd-9fdf6a486c95" (UID: "49237a00-8280-4287-a6fd-9fdf6a486c95"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:48.453937 master-0 kubenswrapper[31411]: I0224 02:36:48.453797 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e6b0cc7-1005-47be-bc50-5b6057e2407d" (UID: "4e6b0cc7-1005-47be-bc50-5b6057e2407d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:48.461810 master-0 kubenswrapper[31411]: I0224 02:36:48.461765 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49237a00-8280-4287-a6fd-9fdf6a486c95" (UID: "49237a00-8280-4287-a6fd-9fdf6a486c95"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:48.462272 master-0 kubenswrapper[31411]: I0224 02:36:48.462229 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49237a00-8280-4287-a6fd-9fdf6a486c95" (UID: "49237a00-8280-4287-a6fd-9fdf6a486c95"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:48.467706 master-0 kubenswrapper[31411]: I0224 02:36:48.467654 31411 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467706 master-0 kubenswrapper[31411]: I0224 02:36:48.467700 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467706 master-0 kubenswrapper[31411]: I0224 02:36:48.467712 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467850 master-0 kubenswrapper[31411]: I0224 02:36:48.467723 31411 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467850 master-0 kubenswrapper[31411]: I0224 02:36:48.467736 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467850 master-0 kubenswrapper[31411]: I0224 02:36:48.467750 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxcsc\" (UniqueName: \"kubernetes.io/projected/4e6b0cc7-1005-47be-bc50-5b6057e2407d-kube-api-access-qxcsc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467850 master-0 kubenswrapper[31411]: I0224 02:36:48.467759 31411 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467850 master-0 kubenswrapper[31411]: I0224 02:36:48.467767 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467850 master-0 kubenswrapper[31411]: I0224 02:36:48.467779 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcvh2\" (UniqueName: \"kubernetes.io/projected/49237a00-8280-4287-a6fd-9fdf6a486c95-kube-api-access-vcvh2\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467850 master-0 kubenswrapper[31411]: I0224 02:36:48.467787 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.467850 master-0 kubenswrapper[31411]: I0224 02:36:48.467797 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6b0cc7-1005-47be-bc50-5b6057e2407d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.482641 master-0 kubenswrapper[31411]: I0224 02:36:48.482556 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49237a00-8280-4287-a6fd-9fdf6a486c95" (UID: "49237a00-8280-4287-a6fd-9fdf6a486c95"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:36:48.573076 master-0 kubenswrapper[31411]: I0224 02:36:48.572971 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49237a00-8280-4287-a6fd-9fdf6a486c95-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:48.695644 master-0 kubenswrapper[31411]: I0224 02:36:48.695449 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84556f859-6lpst" event={"ID":"49237a00-8280-4287-a6fd-9fdf6a486c95","Type":"ContainerDied","Data":"764312acfdf8be88a514b6108c4ef989edfe6f6f49b33150c19cd99094c6c3a3"} Feb 24 02:36:48.695644 master-0 kubenswrapper[31411]: I0224 02:36:48.695509 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84556f859-6lpst" Feb 24 02:36:48.698255 master-0 kubenswrapper[31411]: I0224 02:36:48.698199 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdcb6" event={"ID":"4e6b0cc7-1005-47be-bc50-5b6057e2407d","Type":"ContainerDied","Data":"b0334569865c691ea1bbd65d37ef31df922a690a52b652f140e63e33836ecbb3"} Feb 24 02:36:48.698337 master-0 kubenswrapper[31411]: I0224 02:36:48.698262 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wdcb6" Feb 24 02:36:48.698337 master-0 kubenswrapper[31411]: I0224 02:36:48.698255 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0334569865c691ea1bbd65d37ef31df922a690a52b652f140e63e33836ecbb3" Feb 24 02:36:48.782071 master-0 kubenswrapper[31411]: I0224 02:36:48.781955 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84556f859-6lpst"] Feb 24 02:36:48.796089 master-0 kubenswrapper[31411]: I0224 02:36:48.796024 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84556f859-6lpst"] Feb 24 02:36:49.118854 master-0 kubenswrapper[31411]: I0224 02:36:49.118737 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49237a00-8280-4287-a6fd-9fdf6a486c95" path="/var/lib/kubelet/pods/49237a00-8280-4287-a6fd-9fdf6a486c95/volumes" Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: I0224 02:36:49.572635 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8f98fb65f-btxw6"] Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: E0224 02:36:49.573294 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerName="dnsmasq-dns" Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: I0224 02:36:49.573307 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerName="dnsmasq-dns" Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: E0224 02:36:49.573325 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6b0cc7-1005-47be-bc50-5b6057e2407d" containerName="keystone-bootstrap" Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: I0224 02:36:49.573332 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6b0cc7-1005-47be-bc50-5b6057e2407d" containerName="keystone-bootstrap" Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: E0224 
02:36:49.573351 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerName="init" Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: I0224 02:36:49.573358 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerName="init" Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: I0224 02:36:49.573585 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerName="dnsmasq-dns" Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: I0224 02:36:49.573635 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6b0cc7-1005-47be-bc50-5b6057e2407d" containerName="keystone-bootstrap" Feb 24 02:36:49.577598 master-0 kubenswrapper[31411]: I0224 02:36:49.574415 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.578466 master-0 kubenswrapper[31411]: I0224 02:36:49.578158 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 24 02:36:49.578466 master-0 kubenswrapper[31411]: I0224 02:36:49.578379 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 24 02:36:49.578529 master-0 kubenswrapper[31411]: I0224 02:36:49.578510 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 24 02:36:49.579615 master-0 kubenswrapper[31411]: I0224 02:36:49.578658 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 24 02:36:49.580963 master-0 kubenswrapper[31411]: I0224 02:36:49.580922 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 24 02:36:49.604229 master-0 kubenswrapper[31411]: I0224 02:36:49.601862 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-8f98fb65f-btxw6"] Feb 24 02:36:49.664985 master-0 kubenswrapper[31411]: I0224 02:36:49.664900 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:49.713273 master-0 kubenswrapper[31411]: I0224 02:36:49.710167 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-njvpx" event={"ID":"7edb7291-5510-45f1-810f-de8e6bf08cd0","Type":"ContainerDied","Data":"192e091c935f2d5576ba7c5e67ec86e5cddab711bb6fb7ef60fe8a51faf4d7d2"} Feb 24 02:36:49.713273 master-0 kubenswrapper[31411]: I0224 02:36:49.710223 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="192e091c935f2d5576ba7c5e67ec86e5cddab711bb6fb7ef60fe8a51faf4d7d2" Feb 24 02:36:49.713273 master-0 kubenswrapper[31411]: I0224 02:36:49.710327 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-njvpx" Feb 24 02:36:49.732266 master-0 kubenswrapper[31411]: I0224 02:36:49.732106 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-config-data\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.732266 master-0 kubenswrapper[31411]: I0224 02:36:49.732207 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qxqf\" (UniqueName: \"kubernetes.io/projected/14b140d8-b731-46c1-bb66-63e4345873c0-kube-api-access-2qxqf\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.732266 master-0 kubenswrapper[31411]: I0224 02:36:49.732262 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-internal-tls-certs\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.732547 master-0 kubenswrapper[31411]: I0224 02:36:49.732281 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-public-tls-certs\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.732626 master-0 kubenswrapper[31411]: I0224 02:36:49.732525 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-scripts\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.732808 master-0 kubenswrapper[31411]: I0224 02:36:49.732744 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-credential-keys\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.733198 master-0 kubenswrapper[31411]: I0224 02:36:49.733154 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-combined-ca-bundle\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.733344 master-0 kubenswrapper[31411]: I0224 02:36:49.733323 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-fernet-keys\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.835688 master-0 kubenswrapper[31411]: I0224 02:36:49.835494 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-scripts\") pod \"7edb7291-5510-45f1-810f-de8e6bf08cd0\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " Feb 24 02:36:49.835688 master-0 kubenswrapper[31411]: I0224 02:36:49.835647 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-824bs\" (UniqueName: \"kubernetes.io/projected/7edb7291-5510-45f1-810f-de8e6bf08cd0-kube-api-access-824bs\") pod \"7edb7291-5510-45f1-810f-de8e6bf08cd0\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " Feb 24 02:36:49.835688 master-0 kubenswrapper[31411]: I0224 02:36:49.835673 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-config-data\") pod \"7edb7291-5510-45f1-810f-de8e6bf08cd0\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " Feb 24 02:36:49.836058 master-0 kubenswrapper[31411]: I0224 02:36:49.835707 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-combined-ca-bundle\") pod \"7edb7291-5510-45f1-810f-de8e6bf08cd0\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " Feb 24 02:36:49.836058 master-0 kubenswrapper[31411]: I0224 02:36:49.835869 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7edb7291-5510-45f1-810f-de8e6bf08cd0-logs\") pod \"7edb7291-5510-45f1-810f-de8e6bf08cd0\" (UID: \"7edb7291-5510-45f1-810f-de8e6bf08cd0\") " Feb 24 02:36:49.836204 master-0 kubenswrapper[31411]: I0224 02:36:49.836165 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qxqf\" (UniqueName: \"kubernetes.io/projected/14b140d8-b731-46c1-bb66-63e4345873c0-kube-api-access-2qxqf\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.836269 master-0 kubenswrapper[31411]: I0224 02:36:49.836219 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-internal-tls-certs\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.836269 master-0 kubenswrapper[31411]: I0224 02:36:49.836252 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-public-tls-certs\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.836360 master-0 kubenswrapper[31411]: I0224 02:36:49.836324 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-scripts\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.836822 master-0 kubenswrapper[31411]: I0224 02:36:49.836785 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-credential-keys\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.836895 master-0 kubenswrapper[31411]: I0224 02:36:49.836882 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-combined-ca-bundle\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.836969 master-0 kubenswrapper[31411]: I0224 02:36:49.836933 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-fernet-keys\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.837080 master-0 kubenswrapper[31411]: I0224 02:36:49.837047 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-config-data\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.839046 master-0 kubenswrapper[31411]: I0224 02:36:49.838995 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7edb7291-5510-45f1-810f-de8e6bf08cd0-logs" (OuterVolumeSpecName: "logs") pod "7edb7291-5510-45f1-810f-de8e6bf08cd0" (UID: "7edb7291-5510-45f1-810f-de8e6bf08cd0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:36:49.844068 master-0 kubenswrapper[31411]: I0224 02:36:49.844027 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-credential-keys\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.844340 master-0 kubenswrapper[31411]: I0224 02:36:49.844309 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-config-data\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.845179 master-0 kubenswrapper[31411]: I0224 02:36:49.845119 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-scripts\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.845856 master-0 kubenswrapper[31411]: I0224 02:36:49.845806 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7edb7291-5510-45f1-810f-de8e6bf08cd0-kube-api-access-824bs" (OuterVolumeSpecName: "kube-api-access-824bs") pod "7edb7291-5510-45f1-810f-de8e6bf08cd0" (UID: "7edb7291-5510-45f1-810f-de8e6bf08cd0"). InnerVolumeSpecName "kube-api-access-824bs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:49.845947 master-0 kubenswrapper[31411]: I0224 02:36:49.845832 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-scripts" (OuterVolumeSpecName: "scripts") pod "7edb7291-5510-45f1-810f-de8e6bf08cd0" (UID: "7edb7291-5510-45f1-810f-de8e6bf08cd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:49.846932 master-0 kubenswrapper[31411]: I0224 02:36:49.846747 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-internal-tls-certs\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.846932 master-0 kubenswrapper[31411]: I0224 02:36:49.846870 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-public-tls-certs\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.848852 master-0 kubenswrapper[31411]: I0224 02:36:49.848803 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-combined-ca-bundle\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.871519 master-0 kubenswrapper[31411]: I0224 02:36:49.871455 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qxqf\" (UniqueName: \"kubernetes.io/projected/14b140d8-b731-46c1-bb66-63e4345873c0-kube-api-access-2qxqf\") pod \"keystone-8f98fb65f-btxw6\" (UID: 
\"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.872240 master-0 kubenswrapper[31411]: I0224 02:36:49.872183 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14b140d8-b731-46c1-bb66-63e4345873c0-fernet-keys\") pod \"keystone-8f98fb65f-btxw6\" (UID: \"14b140d8-b731-46c1-bb66-63e4345873c0\") " pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:49.878430 master-0 kubenswrapper[31411]: I0224 02:36:49.878333 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7edb7291-5510-45f1-810f-de8e6bf08cd0" (UID: "7edb7291-5510-45f1-810f-de8e6bf08cd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:49.918438 master-0 kubenswrapper[31411]: I0224 02:36:49.915247 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-config-data" (OuterVolumeSpecName: "config-data") pod "7edb7291-5510-45f1-810f-de8e6bf08cd0" (UID: "7edb7291-5510-45f1-810f-de8e6bf08cd0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:49.939806 master-0 kubenswrapper[31411]: I0224 02:36:49.939647 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:49.939806 master-0 kubenswrapper[31411]: I0224 02:36:49.939692 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-824bs\" (UniqueName: \"kubernetes.io/projected/7edb7291-5510-45f1-810f-de8e6bf08cd0-kube-api-access-824bs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:49.939806 master-0 kubenswrapper[31411]: I0224 02:36:49.939706 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:49.939806 master-0 kubenswrapper[31411]: I0224 02:36:49.939715 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7edb7291-5510-45f1-810f-de8e6bf08cd0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:49.939806 master-0 kubenswrapper[31411]: I0224 02:36:49.939724 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7edb7291-5510-45f1-810f-de8e6bf08cd0-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:49.980558 master-0 kubenswrapper[31411]: I0224 02:36:49.980492 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:50.939279 master-0 kubenswrapper[31411]: I0224 02:36:50.939171 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f597cf46d-llslv"] Feb 24 02:36:50.941220 master-0 kubenswrapper[31411]: E0224 02:36:50.941172 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7edb7291-5510-45f1-810f-de8e6bf08cd0" containerName="placement-db-sync" Feb 24 02:36:50.941220 master-0 kubenswrapper[31411]: I0224 02:36:50.941214 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="7edb7291-5510-45f1-810f-de8e6bf08cd0" containerName="placement-db-sync" Feb 24 02:36:50.942059 master-0 kubenswrapper[31411]: I0224 02:36:50.941760 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="7edb7291-5510-45f1-810f-de8e6bf08cd0" containerName="placement-db-sync" Feb 24 02:36:50.948558 master-0 kubenswrapper[31411]: I0224 02:36:50.948518 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:50.953405 master-0 kubenswrapper[31411]: I0224 02:36:50.953361 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 24 02:36:50.953551 master-0 kubenswrapper[31411]: I0224 02:36:50.953537 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 24 02:36:50.954049 master-0 kubenswrapper[31411]: I0224 02:36:50.954020 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 24 02:36:50.961494 master-0 kubenswrapper[31411]: I0224 02:36:50.960095 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 24 02:36:50.971390 master-0 kubenswrapper[31411]: I0224 02:36:50.969586 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f597cf46d-llslv"] Feb 24 02:36:51.066988 master-0 kubenswrapper[31411]: I0224 02:36:51.066912 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-public-tls-certs\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.067634 master-0 kubenswrapper[31411]: I0224 02:36:51.067009 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-logs\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.067634 master-0 kubenswrapper[31411]: I0224 02:36:51.067096 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-internal-tls-certs\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.067634 master-0 kubenswrapper[31411]: I0224 02:36:51.067212 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stwrq\" (UniqueName: \"kubernetes.io/projected/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-kube-api-access-stwrq\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.067634 master-0 kubenswrapper[31411]: I0224 02:36:51.067435 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-config-data\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.067634 master-0 kubenswrapper[31411]: I0224 02:36:51.067537 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-scripts\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.067798 master-0 kubenswrapper[31411]: I0224 02:36:51.067654 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-combined-ca-bundle\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.170176 master-0 kubenswrapper[31411]: I0224 02:36:51.170096 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-stwrq\" (UniqueName: \"kubernetes.io/projected/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-kube-api-access-stwrq\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.170464 master-0 kubenswrapper[31411]: I0224 02:36:51.170230 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-config-data\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.170464 master-0 kubenswrapper[31411]: I0224 02:36:51.170277 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-scripts\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.170464 master-0 kubenswrapper[31411]: I0224 02:36:51.170334 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-combined-ca-bundle\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.170464 master-0 kubenswrapper[31411]: I0224 02:36:51.170432 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-public-tls-certs\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.170974 master-0 kubenswrapper[31411]: I0224 02:36:51.170850 31411 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-logs\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.171317 master-0 kubenswrapper[31411]: I0224 02:36:51.171266 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-logs\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.171368 master-0 kubenswrapper[31411]: I0224 02:36:51.171327 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-internal-tls-certs\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.174914 master-0 kubenswrapper[31411]: I0224 02:36:51.174827 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-config-data\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.176297 master-0 kubenswrapper[31411]: I0224 02:36:51.176199 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-scripts\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.177264 master-0 kubenswrapper[31411]: I0224 02:36:51.176789 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-internal-tls-certs\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.179709 master-0 kubenswrapper[31411]: I0224 02:36:51.178933 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-public-tls-certs\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.183782 master-0 kubenswrapper[31411]: I0224 02:36:51.183746 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-combined-ca-bundle\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.186166 master-0 kubenswrapper[31411]: I0224 02:36:51.186122 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stwrq\" (UniqueName: \"kubernetes.io/projected/2b01bb6c-8488-4275-a2b0-ee35dbd9eb39-kube-api-access-stwrq\") pod \"placement-f597cf46d-llslv\" (UID: \"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39\") " pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.267093 master-0 kubenswrapper[31411]: I0224 02:36:51.267024 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:51.755904 master-0 kubenswrapper[31411]: I0224 02:36:51.755792 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84556f859-6lpst" podUID="49237a00-8280-4287-a6fd-9fdf6a486c95" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.194:5353: i/o timeout" Feb 24 02:36:51.955484 master-0 kubenswrapper[31411]: I0224 02:36:51.955449 31411 scope.go:117] "RemoveContainer" containerID="dd82f27f0c57a76f84c5efda33e8d6c01072fe917998ad05ab2156d3341121d6" Feb 24 02:36:53.836007 master-0 kubenswrapper[31411]: I0224 02:36:53.835677 31411 scope.go:117] "RemoveContainer" containerID="e384e2d0410bedeb4234d1124efb7db793999cb383a58aa2d2afc9cc3a62b1ef" Feb 24 02:36:53.995222 master-0 kubenswrapper[31411]: I0224 02:36:53.995180 31411 scope.go:117] "RemoveContainer" containerID="ee3a47bc61fe2b5bc13160578709c20ad39bac3d0df18cb4d84e91cc9b4a6a01" Feb 24 02:36:54.041338 master-0 kubenswrapper[31411]: I0224 02:36:54.040493 31411 scope.go:117] "RemoveContainer" containerID="bc0d39e514b6aaeeaeac1c835533735f6eb360a0428ecb221dc08d014e43c8fb" Feb 24 02:36:54.073637 master-0 kubenswrapper[31411]: I0224 02:36:54.073596 31411 scope.go:117] "RemoveContainer" containerID="a61e37267045d9a0a6799656e5f6d8df89b360970163b968a58b7750b629bd84" Feb 24 02:36:54.391250 master-0 kubenswrapper[31411]: W0224 02:36:54.390703 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e330fef_38b6_4b5d_b001_886ecfdd4028.slice/crio-012d64fe28e7c9831f2ab24bc518fc8d7b005b340b7e3147a5d25b756f0f93d4 WatchSource:0}: Error finding container 012d64fe28e7c9831f2ab24bc518fc8d7b005b340b7e3147a5d25b756f0f93d4: Status 404 returned error can't find the container with id 012d64fe28e7c9831f2ab24bc518fc8d7b005b340b7e3147a5d25b756f0f93d4 Feb 24 02:36:54.412150 master-0 kubenswrapper[31411]: I0224 02:36:54.412081 31411 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-external-api-0"] Feb 24 02:36:54.464584 master-0 kubenswrapper[31411]: I0224 02:36:54.464521 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"] Feb 24 02:36:54.524019 master-0 kubenswrapper[31411]: I0224 02:36:54.522695 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8f98fb65f-btxw6"] Feb 24 02:36:54.534963 master-0 kubenswrapper[31411]: W0224 02:36:54.534902 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b140d8_b731_46c1_bb66_63e4345873c0.slice/crio-342cd6299c4ca81ac672a9c14dc2a8fdcdcf51d9c55877e3b354eb9a08bad9b1 WatchSource:0}: Error finding container 342cd6299c4ca81ac672a9c14dc2a8fdcdcf51d9c55877e3b354eb9a08bad9b1: Status 404 returned error can't find the container with id 342cd6299c4ca81ac672a9c14dc2a8fdcdcf51d9c55877e3b354eb9a08bad9b1 Feb 24 02:36:54.537502 master-0 kubenswrapper[31411]: I0224 02:36:54.537450 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f597cf46d-llslv"] Feb 24 02:36:54.801820 master-0 kubenswrapper[31411]: I0224 02:36:54.801769 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f597cf46d-llslv" event={"ID":"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39","Type":"ContainerStarted","Data":"f58a69c685a190cbbadef06a106fd1244dc8cee60f9e24745b13e85e35cef5c2"} Feb 24 02:36:54.808552 master-0 kubenswrapper[31411]: I0224 02:36:54.808505 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8f98fb65f-btxw6" event={"ID":"14b140d8-b731-46c1-bb66-63e4345873c0","Type":"ContainerStarted","Data":"342cd6299c4ca81ac672a9c14dc2a8fdcdcf51d9c55877e3b354eb9a08bad9b1"} Feb 24 02:36:54.811500 master-0 kubenswrapper[31411]: I0224 02:36:54.811470 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d7d5c157-238f-4062-8e57-dccee5fa4f9e","Type":"ContainerStarted","Data":"80ca5bbe767d764d34da8388315905b2af3069e5506597963e2473e36dcb7731"} Feb 24 02:36:54.813820 master-0 kubenswrapper[31411]: I0224 02:36:54.813783 31411 generic.go:334] "Generic (PLEG): container finished" podID="a3a705bf-9636-4410-a44a-6ff6907d4179" containerID="5099be0b63b83ad0af9fae73dcbd9a1cb30dd6e86579c9efb5b56f154efb3c01" exitCode=0 Feb 24 02:36:54.813890 master-0 kubenswrapper[31411]: I0224 02:36:54.813837 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jzr8b" event={"ID":"a3a705bf-9636-4410-a44a-6ff6907d4179","Type":"ContainerDied","Data":"5099be0b63b83ad0af9fae73dcbd9a1cb30dd6e86579c9efb5b56f154efb3c01"} Feb 24 02:36:54.816688 master-0 kubenswrapper[31411]: I0224 02:36:54.816654 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"3e330fef-38b6-4b5d-b001-886ecfdd4028","Type":"ContainerStarted","Data":"012d64fe28e7c9831f2ab24bc518fc8d7b005b340b7e3147a5d25b756f0f93d4"} Feb 24 02:36:54.818393 master-0 kubenswrapper[31411]: I0224 02:36:54.818329 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-db-sync-mhchn" event={"ID":"c356cf44-9774-4260-9463-2960be302f0e","Type":"ContainerStarted","Data":"df4c2d83721af90c423447bcf1b9b69d8aac7ee0523be5a3ea4ac895cf7ffb24"} Feb 24 02:36:54.863654 master-0 kubenswrapper[31411]: I0224 02:36:54.861132 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ac23-db-sync-mhchn" podStartSLOduration=3.073411364 podStartE2EDuration="29.861094314s" podCreationTimestamp="2026-02-24 02:36:25 +0000 UTC" firstStartedPulling="2026-02-24 02:36:27.006157995 +0000 UTC m=+930.223355841" lastFinishedPulling="2026-02-24 02:36:53.793840945 +0000 UTC m=+957.011038791" observedRunningTime="2026-02-24 02:36:54.859981732 +0000 UTC 
m=+958.077179618" watchObservedRunningTime="2026-02-24 02:36:54.861094314 +0000 UTC m=+958.078292160" Feb 24 02:36:55.184702 master-0 kubenswrapper[31411]: E0224 02:36:55.183915 31411 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 24 02:36:55.184702 master-0 kubenswrapper[31411]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/a3a705bf-9636-4410-a44a-6ff6907d4179/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Feb 24 02:36:55.184702 master-0 kubenswrapper[31411]: > podSandboxID="4132648f2993f7a672d22ddb371ede07a50dc3610cf1142ca93b21a22ec87fb2" Feb 24 02:36:55.184702 master-0 kubenswrapper[31411]: E0224 02:36:55.184120 31411 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 02:36:55.184702 master-0 kubenswrapper[31411]: container &Container{Name:ironic-db-sync,Image:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:a64e15599f122be2556f06a936194cbabe1d7b41aa848506abe44ebc54a3a556,Command:[/bin/bash],Args:[-c 
/usr/local/bin/container-scripts/dbsync.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-podinfo,ReadOnly:false,MountPath:/etc/podinfo,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q8vp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ironic-db-sync-jzr8b_openstack(a3a705bf-9636-4410-a44a-6ff6907d4179): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/a3a705bf-9636-4410-a44a-6ff6907d4179/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Feb 24 02:36:55.184702 master-0 kubenswrapper[31411]: > logger="UnhandledError" Feb 24 02:36:55.191475 master-0 kubenswrapper[31411]: E0224 02:36:55.191411 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-db-sync\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/a3a705bf-9636-4410-a44a-6ff6907d4179/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory\\n\"" pod="openstack/ironic-db-sync-jzr8b" podUID="a3a705bf-9636-4410-a44a-6ff6907d4179" Feb 24 02:36:55.835776 master-0 kubenswrapper[31411]: I0224 02:36:55.835698 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8f98fb65f-btxw6" event={"ID":"14b140d8-b731-46c1-bb66-63e4345873c0","Type":"ContainerStarted","Data":"2bb4bd47683ecb9ffe1ec27a80e254edda5215e84732f48e0cfdb03f9f50bab7"} Feb 24 02:36:55.837941 master-0 kubenswrapper[31411]: I0224 02:36:55.837012 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:36:55.844732 master-0 kubenswrapper[31411]: I0224 02:36:55.839105 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"3e330fef-38b6-4b5d-b001-886ecfdd4028","Type":"ContainerStarted","Data":"4e2c4e26527f21cb6fd2affdd10a05d6c19198a22fef1d0abeba78f369c2da3a"} Feb 24 02:36:55.844732 master-0 kubenswrapper[31411]: I0224 02:36:55.841549 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" 
event={"ID":"d7d5c157-238f-4062-8e57-dccee5fa4f9e","Type":"ContainerStarted","Data":"d22d5d85603b1447841eb0a9617422fe92c19152b0c341477cd29d81d8ba7274"} Feb 24 02:36:55.844732 master-0 kubenswrapper[31411]: I0224 02:36:55.844458 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f597cf46d-llslv" event={"ID":"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39","Type":"ContainerStarted","Data":"e52ffb29a3999e68458d10e1018e5619e2bbea2ce4fac7b559826971fb19beed"} Feb 24 02:36:55.844732 master-0 kubenswrapper[31411]: I0224 02:36:55.844559 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:55.844732 master-0 kubenswrapper[31411]: I0224 02:36:55.844608 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f597cf46d-llslv" event={"ID":"2b01bb6c-8488-4275-a2b0-ee35dbd9eb39","Type":"ContainerStarted","Data":"6bcda1e404dfafb9b5abc35db8aac8a5d0430f86eebee7fee79b064f3c759016"} Feb 24 02:36:55.844732 master-0 kubenswrapper[31411]: I0224 02:36:55.844633 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f597cf46d-llslv" Feb 24 02:36:55.890143 master-0 kubenswrapper[31411]: I0224 02:36:55.890017 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8f98fb65f-btxw6" podStartSLOduration=6.889991137 podStartE2EDuration="6.889991137s" podCreationTimestamp="2026-02-24 02:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:55.861650622 +0000 UTC m=+959.078848468" watchObservedRunningTime="2026-02-24 02:36:55.889991137 +0000 UTC m=+959.107189003" Feb 24 02:36:55.916095 master-0 kubenswrapper[31411]: I0224 02:36:55.914241 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f597cf46d-llslv" podStartSLOduration=5.914209816 
podStartE2EDuration="5.914209816s" podCreationTimestamp="2026-02-24 02:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:55.902658282 +0000 UTC m=+959.119856148" watchObservedRunningTime="2026-02-24 02:36:55.914209816 +0000 UTC m=+959.131407672" Feb 24 02:36:56.873773 master-0 kubenswrapper[31411]: I0224 02:36:56.873444 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jzr8b" event={"ID":"a3a705bf-9636-4410-a44a-6ff6907d4179","Type":"ContainerStarted","Data":"47500ce406e24c05c3cb656695bc67b302bb1c3089aed248cbc1372b1c351a3e"} Feb 24 02:36:56.878247 master-0 kubenswrapper[31411]: I0224 02:36:56.878185 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"3e330fef-38b6-4b5d-b001-886ecfdd4028","Type":"ContainerStarted","Data":"e7ed8eeb5bf5c7ee8f1fab4c4a4bb1d397726e2e8d6ff8b11297690c72eea8cf"} Feb 24 02:36:56.883738 master-0 kubenswrapper[31411]: I0224 02:36:56.883663 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d7d5c157-238f-4062-8e57-dccee5fa4f9e","Type":"ContainerStarted","Data":"ab53938e2cb3d0282666094cb678cbb91c93a2a0e167f53582f18f100dd71ac2"} Feb 24 02:36:56.974526 master-0 kubenswrapper[31411]: I0224 02:36:56.974408 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-jzr8b" podStartSLOduration=5.437937065 podStartE2EDuration="21.974390136s" podCreationTimestamp="2026-02-24 02:36:35 +0000 UTC" firstStartedPulling="2026-02-24 02:36:37.298157447 +0000 UTC m=+940.515355293" lastFinishedPulling="2026-02-24 02:36:53.834610518 +0000 UTC m=+957.051808364" observedRunningTime="2026-02-24 02:36:56.971299119 +0000 UTC m=+960.188497005" watchObservedRunningTime="2026-02-24 02:36:56.974390136 +0000 UTC m=+960.191587982" Feb 24 02:36:57.219040 
master-0 kubenswrapper[31411]: I0224 02:36:57.218899 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8705a-default-external-api-0" podStartSLOduration=19.218872778 podStartE2EDuration="19.218872778s" podCreationTimestamp="2026-02-24 02:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:57.203434095 +0000 UTC m=+960.420631971" watchObservedRunningTime="2026-02-24 02:36:57.218872778 +0000 UTC m=+960.436070634" Feb 24 02:36:57.464861 master-0 kubenswrapper[31411]: I0224 02:36:57.464759 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8705a-default-internal-api-0" podStartSLOduration=19.46473303 podStartE2EDuration="19.46473303s" podCreationTimestamp="2026-02-24 02:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:36:57.433211827 +0000 UTC m=+960.650409713" watchObservedRunningTime="2026-02-24 02:36:57.46473303 +0000 UTC m=+960.681930886" Feb 24 02:36:57.899728 master-0 kubenswrapper[31411]: I0224 02:36:57.898261 31411 generic.go:334] "Generic (PLEG): container finished" podID="8d7c1b26-0a14-4626-b7ed-ec82103e883c" containerID="6549e94a090b89b6149dbf8755d38f83b1b705346f90e412b1f91fca52cc279c" exitCode=0 Feb 24 02:36:57.899728 master-0 kubenswrapper[31411]: I0224 02:36:57.898345 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-k6pnr" event={"ID":"8d7c1b26-0a14-4626-b7ed-ec82103e883c","Type":"ContainerDied","Data":"6549e94a090b89b6149dbf8755d38f83b1b705346f90e412b1f91fca52cc279c"} Feb 24 02:36:59.352917 master-0 kubenswrapper[31411]: I0224 02:36:59.352871 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:36:59.431886 master-0 kubenswrapper[31411]: I0224 02:36:59.431834 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-combined-ca-bundle\") pod \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " Feb 24 02:36:59.432152 master-0 kubenswrapper[31411]: I0224 02:36:59.432130 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6288\" (UniqueName: \"kubernetes.io/projected/8d7c1b26-0a14-4626-b7ed-ec82103e883c-kube-api-access-m6288\") pod \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " Feb 24 02:36:59.432202 master-0 kubenswrapper[31411]: I0224 02:36:59.432167 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-config\") pod \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\" (UID: \"8d7c1b26-0a14-4626-b7ed-ec82103e883c\") " Feb 24 02:36:59.435340 master-0 kubenswrapper[31411]: I0224 02:36:59.435283 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7c1b26-0a14-4626-b7ed-ec82103e883c-kube-api-access-m6288" (OuterVolumeSpecName: "kube-api-access-m6288") pod "8d7c1b26-0a14-4626-b7ed-ec82103e883c" (UID: "8d7c1b26-0a14-4626-b7ed-ec82103e883c"). InnerVolumeSpecName "kube-api-access-m6288". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:36:59.460681 master-0 kubenswrapper[31411]: I0224 02:36:59.460632 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d7c1b26-0a14-4626-b7ed-ec82103e883c" (UID: "8d7c1b26-0a14-4626-b7ed-ec82103e883c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:59.496928 master-0 kubenswrapper[31411]: I0224 02:36:59.496807 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-config" (OuterVolumeSpecName: "config") pod "8d7c1b26-0a14-4626-b7ed-ec82103e883c" (UID: "8d7c1b26-0a14-4626-b7ed-ec82103e883c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:36:59.535712 master-0 kubenswrapper[31411]: I0224 02:36:59.535559 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:59.535712 master-0 kubenswrapper[31411]: I0224 02:36:59.535654 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6288\" (UniqueName: \"kubernetes.io/projected/8d7c1b26-0a14-4626-b7ed-ec82103e883c-kube-api-access-m6288\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:59.535712 master-0 kubenswrapper[31411]: I0224 02:36:59.535668 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d7c1b26-0a14-4626-b7ed-ec82103e883c-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:36:59.935161 master-0 kubenswrapper[31411]: I0224 02:36:59.934988 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-k6pnr" 
event={"ID":"8d7c1b26-0a14-4626-b7ed-ec82103e883c","Type":"ContainerDied","Data":"cf15467a9b1539b1d67a6edf1125709a45c073d86057777ed7c43c4a40ca0ef3"} Feb 24 02:36:59.935161 master-0 kubenswrapper[31411]: I0224 02:36:59.935074 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf15467a9b1539b1d67a6edf1125709a45c073d86057777ed7c43c4a40ca0ef3" Feb 24 02:36:59.935161 master-0 kubenswrapper[31411]: I0224 02:36:59.935132 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-k6pnr" Feb 24 02:37:00.389995 master-0 kubenswrapper[31411]: I0224 02:37:00.389941 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:37:00.390955 master-0 kubenswrapper[31411]: I0224 02:37:00.390868 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:37:00.541826 master-0 kubenswrapper[31411]: I0224 02:37:00.533793 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-674dc645f-b7fhr"] Feb 24 02:37:00.541826 master-0 kubenswrapper[31411]: E0224 02:37:00.534437 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7c1b26-0a14-4626-b7ed-ec82103e883c" containerName="neutron-db-sync" Feb 24 02:37:00.541826 master-0 kubenswrapper[31411]: I0224 02:37:00.534453 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7c1b26-0a14-4626-b7ed-ec82103e883c" containerName="neutron-db-sync" Feb 24 02:37:00.541826 master-0 kubenswrapper[31411]: I0224 02:37:00.534740 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7c1b26-0a14-4626-b7ed-ec82103e883c" containerName="neutron-db-sync" Feb 24 02:37:00.554638 master-0 kubenswrapper[31411]: I0224 02:37:00.552102 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674dc645f-b7fhr"] Feb 24 
02:37:00.554638 master-0 kubenswrapper[31411]: I0224 02:37:00.552249 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.554638 master-0 kubenswrapper[31411]: I0224 02:37:00.553981 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:37:00.562331 master-0 kubenswrapper[31411]: I0224 02:37:00.562042 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:37:00.661593 master-0 kubenswrapper[31411]: I0224 02:37:00.655371 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55455d5d8d-zzwzz"] Feb 24 02:37:00.661593 master-0 kubenswrapper[31411]: I0224 02:37:00.658037 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.663657 master-0 kubenswrapper[31411]: I0224 02:37:00.663083 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 24 02:37:00.663657 master-0 kubenswrapper[31411]: I0224 02:37:00.663328 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 24 02:37:00.665376 master-0 kubenswrapper[31411]: I0224 02:37:00.665333 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 24 02:37:00.669597 master-0 kubenswrapper[31411]: I0224 02:37:00.669300 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-svc\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.669597 master-0 kubenswrapper[31411]: I0224 02:37:00.669379 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-sb\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.669597 master-0 kubenswrapper[31411]: I0224 02:37:00.669438 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx7pm\" (UniqueName: \"kubernetes.io/projected/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-kube-api-access-fx7pm\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.669597 master-0 kubenswrapper[31411]: I0224 02:37:00.669480 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-nb\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.669597 master-0 kubenswrapper[31411]: I0224 02:37:00.669552 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-config\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.669597 master-0 kubenswrapper[31411]: I0224 02:37:00.669590 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-swift-storage-0\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " 
pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.684647 master-0 kubenswrapper[31411]: I0224 02:37:00.684476 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55455d5d8d-zzwzz"] Feb 24 02:37:00.772457 master-0 kubenswrapper[31411]: I0224 02:37:00.772380 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-combined-ca-bundle\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.772457 master-0 kubenswrapper[31411]: I0224 02:37:00.772460 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-config\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.772787 master-0 kubenswrapper[31411]: I0224 02:37:00.772633 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-swift-storage-0\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.772787 master-0 kubenswrapper[31411]: I0224 02:37:00.772757 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcz2q\" (UniqueName: \"kubernetes.io/projected/65df10fc-36c4-4eab-aaf3-962a5294face-kube-api-access-mcz2q\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.772855 master-0 kubenswrapper[31411]: I0224 02:37:00.772832 31411 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-svc\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.773008 master-0 kubenswrapper[31411]: I0224 02:37:00.772978 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-sb\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.773067 master-0 kubenswrapper[31411]: I0224 02:37:00.773048 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-config\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.773168 master-0 kubenswrapper[31411]: I0224 02:37:00.773144 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-ovndb-tls-certs\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.773228 master-0 kubenswrapper[31411]: I0224 02:37:00.773210 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx7pm\" (UniqueName: \"kubernetes.io/projected/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-kube-api-access-fx7pm\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.773376 master-0 kubenswrapper[31411]: I0224 02:37:00.773343 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-config\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.773376 master-0 kubenswrapper[31411]: I0224 02:37:00.773355 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-nb\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.774244 master-0 kubenswrapper[31411]: I0224 02:37:00.774210 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-nb\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.774244 master-0 kubenswrapper[31411]: I0224 02:37:00.774236 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-svc\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.774357 master-0 kubenswrapper[31411]: I0224 02:37:00.774292 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-httpd-config\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.774722 master-0 kubenswrapper[31411]: I0224 02:37:00.774669 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-sb\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.775058 master-0 kubenswrapper[31411]: I0224 02:37:00.775027 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-swift-storage-0\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.792104 master-0 kubenswrapper[31411]: I0224 02:37:00.792020 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx7pm\" (UniqueName: \"kubernetes.io/projected/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-kube-api-access-fx7pm\") pod \"dnsmasq-dns-674dc645f-b7fhr\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.882037 master-0 kubenswrapper[31411]: I0224 02:37:00.881956 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-ovndb-tls-certs\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.882332 master-0 kubenswrapper[31411]: I0224 02:37:00.882312 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-httpd-config\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.882472 master-0 kubenswrapper[31411]: I0224 02:37:00.882444 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-combined-ca-bundle\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.882601 master-0 kubenswrapper[31411]: I0224 02:37:00.882563 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcz2q\" (UniqueName: \"kubernetes.io/projected/65df10fc-36c4-4eab-aaf3-962a5294face-kube-api-access-mcz2q\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.882805 master-0 kubenswrapper[31411]: I0224 02:37:00.882778 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-config\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.883731 master-0 kubenswrapper[31411]: I0224 02:37:00.883694 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:00.887094 master-0 kubenswrapper[31411]: I0224 02:37:00.887055 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-config\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.888453 master-0 kubenswrapper[31411]: I0224 02:37:00.888405 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-combined-ca-bundle\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.888498 master-0 kubenswrapper[31411]: I0224 02:37:00.888420 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-httpd-config\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.888735 master-0 kubenswrapper[31411]: I0224 02:37:00.888683 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-ovndb-tls-certs\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:00.911271 master-0 kubenswrapper[31411]: I0224 02:37:00.911196 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcz2q\" (UniqueName: \"kubernetes.io/projected/65df10fc-36c4-4eab-aaf3-962a5294face-kube-api-access-mcz2q\") pod \"neutron-55455d5d8d-zzwzz\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " pod="openstack/neutron-55455d5d8d-zzwzz" Feb 
24 02:37:00.952374 master-0 kubenswrapper[31411]: I0224 02:37:00.951605 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:37:00.956591 master-0 kubenswrapper[31411]: I0224 02:37:00.953363 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:37:00.981424 master-0 kubenswrapper[31411]: I0224 02:37:00.981222 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:01.418744 master-0 kubenswrapper[31411]: W0224 02:37:01.418681 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4fddb55_1b2d_44e1_8ad9_0f2fb92bd466.slice/crio-3c7f59950bda322bec1848c87ab1da878bfa745ba8749b4b3d301f9ec1f26832 WatchSource:0}: Error finding container 3c7f59950bda322bec1848c87ab1da878bfa745ba8749b4b3d301f9ec1f26832: Status 404 returned error can't find the container with id 3c7f59950bda322bec1848c87ab1da878bfa745ba8749b4b3d301f9ec1f26832 Feb 24 02:37:01.420734 master-0 kubenswrapper[31411]: I0224 02:37:01.420703 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674dc645f-b7fhr"] Feb 24 02:37:01.691746 master-0 kubenswrapper[31411]: W0224 02:37:01.691696 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65df10fc_36c4_4eab_aaf3_962a5294face.slice/crio-a97d3681a366aef008dd8b4e672ebfda294bf7bedf2b08bec5b44d62e29da8e7 WatchSource:0}: Error finding container a97d3681a366aef008dd8b4e672ebfda294bf7bedf2b08bec5b44d62e29da8e7: Status 404 returned error can't find the container with id a97d3681a366aef008dd8b4e672ebfda294bf7bedf2b08bec5b44d62e29da8e7 Feb 24 02:37:01.699112 master-0 kubenswrapper[31411]: I0224 02:37:01.699052 31411 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/neutron-55455d5d8d-zzwzz"] Feb 24 02:37:01.949380 master-0 kubenswrapper[31411]: I0224 02:37:01.948105 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:01.949380 master-0 kubenswrapper[31411]: I0224 02:37:01.948285 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:01.968699 master-0 kubenswrapper[31411]: I0224 02:37:01.968537 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55455d5d8d-zzwzz" event={"ID":"65df10fc-36c4-4eab-aaf3-962a5294face","Type":"ContainerStarted","Data":"a97d3681a366aef008dd8b4e672ebfda294bf7bedf2b08bec5b44d62e29da8e7"} Feb 24 02:37:01.971053 master-0 kubenswrapper[31411]: I0224 02:37:01.970910 31411 generic.go:334] "Generic (PLEG): container finished" podID="c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" containerID="c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805" exitCode=0 Feb 24 02:37:01.971053 master-0 kubenswrapper[31411]: I0224 02:37:01.971041 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" event={"ID":"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466","Type":"ContainerDied","Data":"c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805"} Feb 24 02:37:01.971174 master-0 kubenswrapper[31411]: I0224 02:37:01.971076 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" event={"ID":"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466","Type":"ContainerStarted","Data":"3c7f59950bda322bec1848c87ab1da878bfa745ba8749b4b3d301f9ec1f26832"} Feb 24 02:37:02.002564 master-0 kubenswrapper[31411]: I0224 02:37:02.002518 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:02.003369 master-0 kubenswrapper[31411]: I0224 02:37:02.003296 31411 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:02.012601 master-0 kubenswrapper[31411]: I0224 02:37:02.010914 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:02.991766 master-0 kubenswrapper[31411]: I0224 02:37:02.991694 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55455d5d8d-zzwzz" event={"ID":"65df10fc-36c4-4eab-aaf3-962a5294face","Type":"ContainerStarted","Data":"ad67ba0dbffb3aee2d1f766a706493369bcb70ce69df62dd27486df44d71d40b"} Feb 24 02:37:02.991766 master-0 kubenswrapper[31411]: I0224 02:37:02.991764 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55455d5d8d-zzwzz" event={"ID":"65df10fc-36c4-4eab-aaf3-962a5294face","Type":"ContainerStarted","Data":"1e06af42a1cc4bcf137d694554941c425d9cdbfbc5a16e50b74086ff3b340afc"} Feb 24 02:37:02.992496 master-0 kubenswrapper[31411]: I0224 02:37:02.992046 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:02.995421 master-0 kubenswrapper[31411]: I0224 02:37:02.995349 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" event={"ID":"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466","Type":"ContainerStarted","Data":"fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4"} Feb 24 02:37:02.995643 master-0 kubenswrapper[31411]: I0224 02:37:02.995615 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:37:02.995699 master-0 kubenswrapper[31411]: I0224 02:37:02.995644 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:37:02.995699 master-0 kubenswrapper[31411]: I0224 02:37:02.995646 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 
02:37:02.995920 master-0 kubenswrapper[31411]: I0224 02:37:02.995890 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:03.173657 master-0 kubenswrapper[31411]: I0224 02:37:03.173567 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:37:03.309019 master-0 kubenswrapper[31411]: I0224 02:37:03.308887 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:37:03.315737 master-0 kubenswrapper[31411]: I0224 02:37:03.315548 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55455d5d8d-zzwzz" podStartSLOduration=3.315512554 podStartE2EDuration="3.315512554s" podCreationTimestamp="2026-02-24 02:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:03.308257701 +0000 UTC m=+966.525455537" watchObservedRunningTime="2026-02-24 02:37:03.315512554 +0000 UTC m=+966.532710430" Feb 24 02:37:03.586121 master-0 kubenswrapper[31411]: I0224 02:37:03.585935 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" podStartSLOduration=3.585910524 podStartE2EDuration="3.585910524s" podCreationTimestamp="2026-02-24 02:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:03.56543509 +0000 UTC m=+966.782632946" watchObservedRunningTime="2026-02-24 02:37:03.585910524 +0000 UTC m=+966.803108360" Feb 24 02:37:04.231390 master-0 kubenswrapper[31411]: I0224 02:37:04.231309 31411 generic.go:334] "Generic (PLEG): container finished" podID="c356cf44-9774-4260-9463-2960be302f0e" 
containerID="df4c2d83721af90c423447bcf1b9b69d8aac7ee0523be5a3ea4ac895cf7ffb24" exitCode=0 Feb 24 02:37:04.232258 master-0 kubenswrapper[31411]: I0224 02:37:04.232146 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-db-sync-mhchn" event={"ID":"c356cf44-9774-4260-9463-2960be302f0e","Type":"ContainerDied","Data":"df4c2d83721af90c423447bcf1b9b69d8aac7ee0523be5a3ea4ac895cf7ffb24"} Feb 24 02:37:04.232258 master-0 kubenswrapper[31411]: I0224 02:37:04.232208 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 02:37:04.567619 master-0 kubenswrapper[31411]: I0224 02:37:04.567480 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:04.568256 master-0 kubenswrapper[31411]: I0224 02:37:04.568092 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:04.801519 master-0 kubenswrapper[31411]: I0224 02:37:04.797453 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b46dbc6bf-ngrn9"] Feb 24 02:37:04.802325 master-0 kubenswrapper[31411]: I0224 02:37:04.802273 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:04.806694 master-0 kubenswrapper[31411]: I0224 02:37:04.806625 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 24 02:37:04.807171 master-0 kubenswrapper[31411]: I0224 02:37:04.807137 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 24 02:37:04.854245 master-0 kubenswrapper[31411]: I0224 02:37:04.854195 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b46dbc6bf-ngrn9"] Feb 24 02:37:04.987626 master-0 kubenswrapper[31411]: I0224 02:37:04.987542 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-httpd-config\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:04.988121 master-0 kubenswrapper[31411]: I0224 02:37:04.987691 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-combined-ca-bundle\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:04.990013 master-0 kubenswrapper[31411]: I0224 02:37:04.989967 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpd9p\" (UniqueName: \"kubernetes.io/projected/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-kube-api-access-tpd9p\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:04.990129 master-0 kubenswrapper[31411]: I0224 02:37:04.990098 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-internal-tls-certs\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:04.990206 master-0 kubenswrapper[31411]: I0224 02:37:04.990163 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-config\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:04.990284 master-0 kubenswrapper[31411]: I0224 02:37:04.990228 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-public-tls-certs\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:04.990668 master-0 kubenswrapper[31411]: I0224 02:37:04.990632 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-ovndb-tls-certs\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.094849 master-0 kubenswrapper[31411]: I0224 02:37:05.094725 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpd9p\" (UniqueName: \"kubernetes.io/projected/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-kube-api-access-tpd9p\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.095208 master-0 kubenswrapper[31411]: 
I0224 02:37:05.094917 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-internal-tls-certs\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.095208 master-0 kubenswrapper[31411]: I0224 02:37:05.094972 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-config\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.095208 master-0 kubenswrapper[31411]: I0224 02:37:05.095052 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-public-tls-certs\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.095420 master-0 kubenswrapper[31411]: I0224 02:37:05.095255 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-ovndb-tls-certs\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.095420 master-0 kubenswrapper[31411]: I0224 02:37:05.095416 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-httpd-config\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.095570 master-0 kubenswrapper[31411]: I0224 02:37:05.095492 31411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-combined-ca-bundle\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.101181 master-0 kubenswrapper[31411]: I0224 02:37:05.100848 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-ovndb-tls-certs\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.103436 master-0 kubenswrapper[31411]: I0224 02:37:05.102562 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-public-tls-certs\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.103436 master-0 kubenswrapper[31411]: I0224 02:37:05.102619 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-config\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.103436 master-0 kubenswrapper[31411]: I0224 02:37:05.103140 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-internal-tls-certs\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.103436 master-0 kubenswrapper[31411]: I0224 02:37:05.103338 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-combined-ca-bundle\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.104519 master-0 kubenswrapper[31411]: I0224 02:37:05.104491 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-httpd-config\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.138647 master-0 kubenswrapper[31411]: I0224 02:37:05.138564 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpd9p\" (UniqueName: \"kubernetes.io/projected/4cae4ee5-812a-4144-bd0a-aadc0a96ace5-kube-api-access-tpd9p\") pod \"neutron-6b46dbc6bf-ngrn9\" (UID: \"4cae4ee5-812a-4144-bd0a-aadc0a96ace5\") " pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.157114 master-0 kubenswrapper[31411]: I0224 02:37:05.157061 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:05.496597 master-0 kubenswrapper[31411]: E0224 02:37:05.492531 31411 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="master-0" Feb 24 02:37:05.773454 master-0 kubenswrapper[31411]: I0224 02:37:05.773428 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-db-sync-mhchn" Feb 24 02:37:05.908605 master-0 kubenswrapper[31411]: W0224 02:37:05.907978 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cae4ee5_812a_4144_bd0a_aadc0a96ace5.slice/crio-abfa579f3b91101a0fd3963f34b70e3b28df36fd3863867586ff677730b12fa0 WatchSource:0}: Error finding container abfa579f3b91101a0fd3963f34b70e3b28df36fd3863867586ff677730b12fa0: Status 404 returned error can't find the container with id abfa579f3b91101a0fd3963f34b70e3b28df36fd3863867586ff677730b12fa0 Feb 24 02:37:05.911601 master-0 kubenswrapper[31411]: I0224 02:37:05.911541 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b46dbc6bf-ngrn9"] Feb 24 02:37:05.915314 master-0 kubenswrapper[31411]: I0224 02:37:05.915278 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w4pl\" (UniqueName: \"kubernetes.io/projected/c356cf44-9774-4260-9463-2960be302f0e-kube-api-access-9w4pl\") pod \"c356cf44-9774-4260-9463-2960be302f0e\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " Feb 24 02:37:05.915389 master-0 kubenswrapper[31411]: I0224 02:37:05.915370 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c356cf44-9774-4260-9463-2960be302f0e-etc-machine-id\") pod \"c356cf44-9774-4260-9463-2960be302f0e\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " Feb 24 02:37:05.915499 master-0 kubenswrapper[31411]: I0224 02:37:05.915469 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-scripts\") pod \"c356cf44-9774-4260-9463-2960be302f0e\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " Feb 24 02:37:05.915564 master-0 kubenswrapper[31411]: I0224 02:37:05.915545 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-config-data\") pod \"c356cf44-9774-4260-9463-2960be302f0e\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " Feb 24 02:37:05.915621 master-0 kubenswrapper[31411]: I0224 02:37:05.915603 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-db-sync-config-data\") pod \"c356cf44-9774-4260-9463-2960be302f0e\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " Feb 24 02:37:05.915659 master-0 kubenswrapper[31411]: I0224 02:37:05.915629 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-combined-ca-bundle\") pod \"c356cf44-9774-4260-9463-2960be302f0e\" (UID: \"c356cf44-9774-4260-9463-2960be302f0e\") " Feb 24 02:37:05.915912 master-0 kubenswrapper[31411]: I0224 02:37:05.915847 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c356cf44-9774-4260-9463-2960be302f0e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c356cf44-9774-4260-9463-2960be302f0e" (UID: "c356cf44-9774-4260-9463-2960be302f0e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:37:05.916140 master-0 kubenswrapper[31411]: I0224 02:37:05.916112 31411 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c356cf44-9774-4260-9463-2960be302f0e-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:05.918948 master-0 kubenswrapper[31411]: I0224 02:37:05.918856 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c356cf44-9774-4260-9463-2960be302f0e-kube-api-access-9w4pl" (OuterVolumeSpecName: "kube-api-access-9w4pl") pod "c356cf44-9774-4260-9463-2960be302f0e" (UID: "c356cf44-9774-4260-9463-2960be302f0e"). InnerVolumeSpecName "kube-api-access-9w4pl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:37:05.919582 master-0 kubenswrapper[31411]: I0224 02:37:05.919506 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-scripts" (OuterVolumeSpecName: "scripts") pod "c356cf44-9774-4260-9463-2960be302f0e" (UID: "c356cf44-9774-4260-9463-2960be302f0e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:05.920339 master-0 kubenswrapper[31411]: I0224 02:37:05.920279 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c356cf44-9774-4260-9463-2960be302f0e" (UID: "c356cf44-9774-4260-9463-2960be302f0e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:05.951042 master-0 kubenswrapper[31411]: I0224 02:37:05.950985 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c356cf44-9774-4260-9463-2960be302f0e" (UID: "c356cf44-9774-4260-9463-2960be302f0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:05.975332 master-0 kubenswrapper[31411]: I0224 02:37:05.975277 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-config-data" (OuterVolumeSpecName: "config-data") pod "c356cf44-9774-4260-9463-2960be302f0e" (UID: "c356cf44-9774-4260-9463-2960be302f0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:06.019861 master-0 kubenswrapper[31411]: I0224 02:37:06.019803 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w4pl\" (UniqueName: \"kubernetes.io/projected/c356cf44-9774-4260-9463-2960be302f0e-kube-api-access-9w4pl\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:06.019861 master-0 kubenswrapper[31411]: I0224 02:37:06.019851 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:06.019861 master-0 kubenswrapper[31411]: I0224 02:37:06.019866 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:06.020047 master-0 kubenswrapper[31411]: I0224 02:37:06.019877 31411 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:06.020047 master-0 kubenswrapper[31411]: I0224 02:37:06.019892 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c356cf44-9774-4260-9463-2960be302f0e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:06.267994 master-0 kubenswrapper[31411]: I0224 02:37:06.267935 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-db-sync-mhchn"
Feb 24 02:37:06.271879 master-0 kubenswrapper[31411]: I0224 02:37:06.271817 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-db-sync-mhchn" event={"ID":"c356cf44-9774-4260-9463-2960be302f0e","Type":"ContainerDied","Data":"4ad28eccb95db9c578dd0e0bca11ea30637059ca654da894081956d824463da2"}
Feb 24 02:37:06.271957 master-0 kubenswrapper[31411]: I0224 02:37:06.271890 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ad28eccb95db9c578dd0e0bca11ea30637059ca654da894081956d824463da2"
Feb 24 02:37:06.274152 master-0 kubenswrapper[31411]: I0224 02:37:06.274107 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b46dbc6bf-ngrn9" event={"ID":"4cae4ee5-812a-4144-bd0a-aadc0a96ace5","Type":"ContainerStarted","Data":"74faca83cd34a1128b3956940552a9c23a793c616e828267372cc701438a790b"}
Feb 24 02:37:06.274223 master-0 kubenswrapper[31411]: I0224 02:37:06.274153 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b46dbc6bf-ngrn9" event={"ID":"4cae4ee5-812a-4144-bd0a-aadc0a96ace5","Type":"ContainerStarted","Data":"abfa579f3b91101a0fd3963f34b70e3b28df36fd3863867586ff677730b12fa0"}
Feb 24 02:37:07.217450 master-0 kubenswrapper[31411]: I0224 02:37:07.217397 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674dc645f-b7fhr"]
Feb 24 02:37:07.218460 master-0 kubenswrapper[31411]: I0224 02:37:07.218156 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" podUID="c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" containerName="dnsmasq-dns" containerID="cri-o://fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4" gracePeriod=10
Feb 24 02:37:07.220879 master-0 kubenswrapper[31411]: I0224 02:37:07.218756 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-674dc645f-b7fhr"
Feb 24 02:37:07.230977 master-0 kubenswrapper[31411]: I0224 02:37:07.230896 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ac23-scheduler-0"]
Feb 24 02:37:07.231675 master-0 kubenswrapper[31411]: E0224 02:37:07.231644 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c356cf44-9774-4260-9463-2960be302f0e" containerName="cinder-6ac23-db-sync"
Feb 24 02:37:07.231675 master-0 kubenswrapper[31411]: I0224 02:37:07.231668 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c356cf44-9774-4260-9463-2960be302f0e" containerName="cinder-6ac23-db-sync"
Feb 24 02:37:07.232035 master-0 kubenswrapper[31411]: I0224 02:37:07.231960 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c356cf44-9774-4260-9463-2960be302f0e" containerName="cinder-6ac23-db-sync"
Feb 24 02:37:07.242962 master-0 kubenswrapper[31411]: I0224 02:37:07.242902 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.250515 master-0 kubenswrapper[31411]: I0224 02:37:07.250459 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-config-data"
Feb 24 02:37:07.250872 master-0 kubenswrapper[31411]: I0224 02:37:07.250844 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-scheduler-config-data"
Feb 24 02:37:07.251002 master-0 kubenswrapper[31411]: I0224 02:37:07.250977 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-scripts"
Feb 24 02:37:07.263426 master-0 kubenswrapper[31411]: I0224 02:37:07.263350 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-scheduler-0"]
Feb 24 02:37:07.284823 master-0 kubenswrapper[31411]: I0224 02:37:07.284697 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ac23-volume-lvm-iscsi-0"]
Feb 24 02:37:07.287155 master-0 kubenswrapper[31411]: I0224 02:37:07.287121 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.288979 master-0 kubenswrapper[31411]: I0224 02:37:07.288964 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-volume-lvm-iscsi-config-data"
Feb 24 02:37:07.294721 master-0 kubenswrapper[31411]: I0224 02:37:07.294645 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc5ccc685-kl2f6"]
Feb 24 02:37:07.302386 master-0 kubenswrapper[31411]: I0224 02:37:07.296907 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.315041 master-0 kubenswrapper[31411]: I0224 02:37:07.308853 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-volume-lvm-iscsi-0"]
Feb 24 02:37:07.322168 master-0 kubenswrapper[31411]: I0224 02:37:07.318639 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc5ccc685-kl2f6"]
Feb 24 02:37:07.325078 master-0 kubenswrapper[31411]: I0224 02:37:07.325021 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b46dbc6bf-ngrn9" event={"ID":"4cae4ee5-812a-4144-bd0a-aadc0a96ace5","Type":"ContainerStarted","Data":"591a95f98e8f2920d3b9d98bb3ce9d61fcba18bf7ff9862e2e57b1005d9c769e"}
Feb 24 02:37:07.346509 master-0 kubenswrapper[31411]: I0224 02:37:07.345773 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ac23-backup-0"]
Feb 24 02:37:07.350009 master-0 kubenswrapper[31411]: I0224 02:37:07.348809 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.354255 master-0 kubenswrapper[31411]: I0224 02:37:07.353715 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-backup-config-data"
Feb 24 02:37:07.374467 master-0 kubenswrapper[31411]: I0224 02:37:07.374424 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-backup-0"]
Feb 24 02:37:07.414684 master-0 kubenswrapper[31411]: I0224 02:37:07.414610 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-brick\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.414684 master-0 kubenswrapper[31411]: I0224 02:37:07.414688 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.414976 master-0 kubenswrapper[31411]: I0224 02:37:07.414730 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.414976 master-0 kubenswrapper[31411]: I0224 02:37:07.414758 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data-custom\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.414976 master-0 kubenswrapper[31411]: I0224 02:37:07.414781 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-nvme\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.414976 master-0 kubenswrapper[31411]: I0224 02:37:07.414825 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjljc\" (UniqueName: \"kubernetes.io/projected/dcfea485-a4c1-4ed9-bfb8-2853157da83a-kube-api-access-tjljc\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.414976 master-0 kubenswrapper[31411]: I0224 02:37:07.414864 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-combined-ca-bundle\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.414976 master-0 kubenswrapper[31411]: I0224 02:37:07.414907 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-run\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.414976 master-0 kubenswrapper[31411]: I0224 02:37:07.414934 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-scripts\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.414976 master-0 kubenswrapper[31411]: I0224 02:37:07.414971 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.414995 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415041 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-config\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415066 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415088 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr68t\" (UniqueName: \"kubernetes.io/projected/8e200864-094f-471c-ba9d-863d14ff8404-kube-api-access-gr68t\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415111 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-scripts\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415129 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data-custom\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415148 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-combined-ca-bundle\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415179 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-lib-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415202 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-svc\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415231 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8657494d-8b2e-4545-9713-e55ace12f329-etc-machine-id\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415259 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-dev\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415281 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-sys\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415316 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wzsv\" (UniqueName: \"kubernetes.io/projected/8657494d-8b2e-4545-9713-e55ace12f329-kube-api-access-2wzsv\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415349 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-lib-modules\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415369 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.415388 master-0 kubenswrapper[31411]: I0224 02:37:07.415385 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-iscsi\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.415892 master-0 kubenswrapper[31411]: I0224 02:37:07.415411 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-machine-id\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.497508 master-0 kubenswrapper[31411]: I0224 02:37:07.496856 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ac23-api-0"]
Feb 24 02:37:07.500059 master-0 kubenswrapper[31411]: I0224 02:37:07.499304 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:07.507346 master-0 kubenswrapper[31411]: I0224 02:37:07.502250 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-api-config-data"
Feb 24 02:37:07.517587 master-0 kubenswrapper[31411]: I0224 02:37:07.517512 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-machine-id\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.517864 master-0 kubenswrapper[31411]: I0224 02:37:07.517605 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-dev\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.517864 master-0 kubenswrapper[31411]: I0224 02:37:07.517661 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-sys\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.517864 master-0 kubenswrapper[31411]: I0224 02:37:07.517726 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wzsv\" (UniqueName: \"kubernetes.io/projected/8657494d-8b2e-4545-9713-e55ace12f329-kube-api-access-2wzsv\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.517864 master-0 kubenswrapper[31411]: I0224 02:37:07.517746 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-lib-modules\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.517864 master-0 kubenswrapper[31411]: I0224 02:37:07.517772 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.517864 master-0 kubenswrapper[31411]: I0224 02:37:07.517789 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-iscsi\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.517864 master-0 kubenswrapper[31411]: I0224 02:37:07.517807 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-dev\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.517864 master-0 kubenswrapper[31411]: I0224 02:37:07.517856 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-machine-id\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518190 master-0 kubenswrapper[31411]: I0224 02:37:07.517885 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518190 master-0 kubenswrapper[31411]: I0224 02:37:07.517953 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9lkq\" (UniqueName: \"kubernetes.io/projected/6fcfbc46-898e-4e7b-a96a-e2170c671a45-kube-api-access-l9lkq\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518190 master-0 kubenswrapper[31411]: I0224 02:37:07.518014 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-brick\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518190 master-0 kubenswrapper[31411]: I0224 02:37:07.518035 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-brick\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518190 master-0 kubenswrapper[31411]: I0224 02:37:07.518070 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518190 master-0 kubenswrapper[31411]: I0224 02:37:07.518114 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.518190 master-0 kubenswrapper[31411]: I0224 02:37:07.518143 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data-custom\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.518190 master-0 kubenswrapper[31411]: I0224 02:37:07.518162 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-nvme\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518190 master-0 kubenswrapper[31411]: I0224 02:37:07.518178 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-nvme\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518452 master-0 kubenswrapper[31411]: I0224 02:37:07.518204 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-lib-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518452 master-0 kubenswrapper[31411]: I0224 02:37:07.518245 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data-custom\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518452 master-0 kubenswrapper[31411]: I0224 02:37:07.518297 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjljc\" (UniqueName: \"kubernetes.io/projected/dcfea485-a4c1-4ed9-bfb8-2853157da83a-kube-api-access-tjljc\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.518452 master-0 kubenswrapper[31411]: I0224 02:37:07.518317 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-scripts\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518452 master-0 kubenswrapper[31411]: I0224 02:37:07.518359 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518452 master-0 kubenswrapper[31411]: I0224 02:37:07.518393 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-combined-ca-bundle\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518452 master-0 kubenswrapper[31411]: I0224 02:37:07.518419 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-run\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518452 master-0 kubenswrapper[31411]: I0224 02:37:07.518453 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-scripts\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518490 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518523 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518548 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-sys\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518610 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-lib-modules\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518629 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-iscsi\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518674 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-config\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518694 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518719 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr68t\" (UniqueName: \"kubernetes.io/projected/8e200864-094f-471c-ba9d-863d14ff8404-kube-api-access-gr68t\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518741 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-scripts\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518758 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data-custom\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518776 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-combined-ca-bundle\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.518828 master-0 kubenswrapper[31411]: I0224 02:37:07.518798 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-combined-ca-bundle\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.519206 master-0 kubenswrapper[31411]: I0224 02:37:07.518843 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-lib-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:07.519206 master-0 kubenswrapper[31411]: I0224 02:37:07.518880 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-svc\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:07.519206 master-0 kubenswrapper[31411]: I0224 02:37:07.518918 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-run\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:07.519206 master-0 kubenswrapper[31411]: I0224 02:37:07.518957 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8657494d-8b2e-4545-9713-e55ace12f329-etc-machine-id\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.519206 master-0 kubenswrapper[31411]: I0224 02:37:07.519064 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8657494d-8b2e-4545-9713-e55ace12f329-etc-machine-id\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:07.519206 master-0 kubenswrapper[31411]: I0224 02:37:07.519108 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-dev\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") "
pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.519509 master-0 kubenswrapper[31411]: I0224 02:37:07.519480 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-sys\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.519816 master-0 kubenswrapper[31411]: I0224 02:37:07.519787 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-lib-modules\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.520548 master-0 kubenswrapper[31411]: I0224 02:37:07.520519 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-nb\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:07.524788 master-0 kubenswrapper[31411]: I0224 02:37:07.520566 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-iscsi\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.533012 master-0 kubenswrapper[31411]: I0224 02:37:07.532956 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-run\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 
02:37:07.535738 master-0 kubenswrapper[31411]: I0224 02:37:07.535649 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-nvme\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.536108 master-0 kubenswrapper[31411]: I0224 02:37:07.536031 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-lib-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.536339 master-0 kubenswrapper[31411]: I0224 02:37:07.536306 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.541602 master-0 kubenswrapper[31411]: I0224 02:37:07.539410 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-combined-ca-bundle\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.542322 master-0 kubenswrapper[31411]: I0224 02:37:07.542287 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-config\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:07.543314 
master-0 kubenswrapper[31411]: I0224 02:37:07.543272 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data-custom\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:07.543696 master-0 kubenswrapper[31411]: I0224 02:37:07.543612 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-svc\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:07.546862 master-0 kubenswrapper[31411]: I0224 02:37:07.546736 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-brick\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.546862 master-0 kubenswrapper[31411]: I0224 02:37:07.546795 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-machine-id\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.547196 master-0 kubenswrapper[31411]: I0224 02:37:07.547169 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:07.547627 master-0 kubenswrapper[31411]: I0224 
02:37:07.547558 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:07.548310 master-0 kubenswrapper[31411]: I0224 02:37:07.548279 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-swift-storage-0\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:07.551947 master-0 kubenswrapper[31411]: I0224 02:37:07.551401 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data-custom\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.552707 master-0 kubenswrapper[31411]: I0224 02:37:07.552630 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-scripts\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:07.552986 master-0 kubenswrapper[31411]: I0224 02:37:07.552955 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-scripts\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.554389 master-0 kubenswrapper[31411]: I0224 02:37:07.554354 31411 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wzsv\" (UniqueName: \"kubernetes.io/projected/8657494d-8b2e-4545-9713-e55ace12f329-kube-api-access-2wzsv\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:07.556420 master-0 kubenswrapper[31411]: I0224 02:37:07.556385 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-combined-ca-bundle\") pod \"cinder-6ac23-scheduler-0\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:07.564347 master-0 kubenswrapper[31411]: I0224 02:37:07.564293 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-api-0"] Feb 24 02:37:07.564512 master-0 kubenswrapper[31411]: I0224 02:37:07.564448 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjljc\" (UniqueName: \"kubernetes.io/projected/dcfea485-a4c1-4ed9-bfb8-2853157da83a-kube-api-access-tjljc\") pod \"dnsmasq-dns-6bc5ccc685-kl2f6\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:07.571835 master-0 kubenswrapper[31411]: I0224 02:37:07.571759 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b46dbc6bf-ngrn9" podStartSLOduration=3.5717336680000003 podStartE2EDuration="3.571733668s" podCreationTimestamp="2026-02-24 02:37:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:07.420422876 +0000 UTC m=+970.637620722" watchObservedRunningTime="2026-02-24 02:37:07.571733668 +0000 UTC m=+970.788931514" Feb 24 02:37:07.577393 master-0 kubenswrapper[31411]: I0224 02:37:07.577339 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gr68t\" (UniqueName: \"kubernetes.io/projected/8e200864-094f-471c-ba9d-863d14ff8404-kube-api-access-gr68t\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.586340 master-0 kubenswrapper[31411]: I0224 02:37:07.585667 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.621262 master-0 kubenswrapper[31411]: I0224 02:37:07.621194 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-brick\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.621505 master-0 kubenswrapper[31411]: I0224 02:37:07.621370 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-brick\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.621505 master-0 kubenswrapper[31411]: I0224 02:37:07.621477 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data-custom\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.621614 master-0 kubenswrapper[31411]: I0224 02:37:07.621595 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0062c-3b15-4250-9b1f-bb0143fcd105-logs\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.621745 master-0 kubenswrapper[31411]: I0224 02:37:07.621696 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-nvme\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.621798 master-0 kubenswrapper[31411]: I0224 02:37:07.621749 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-lib-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.621798 master-0 kubenswrapper[31411]: I0224 02:37:07.621780 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef0062c-3b15-4250-9b1f-bb0143fcd105-etc-machine-id\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.621866 master-0 kubenswrapper[31411]: I0224 02:37:07.621802 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data-custom\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.621954 master-0 kubenswrapper[31411]: I0224 02:37:07.621937 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-scripts\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.621998 master-0 kubenswrapper[31411]: I0224 02:37:07.621945 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-nvme\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.621998 master-0 kubenswrapper[31411]: I0224 02:37:07.621976 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.622059 master-0 kubenswrapper[31411]: I0224 02:37:07.622023 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.622090 master-0 kubenswrapper[31411]: I0224 02:37:07.622060 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-lib-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.624134 master-0 kubenswrapper[31411]: I0224 02:37:07.624107 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.624219 master-0 kubenswrapper[31411]: I0224 02:37:07.624168 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-sys\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.624219 master-0 kubenswrapper[31411]: I0224 02:37:07.624205 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-lib-modules\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624229 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-iscsi\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624302 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-iscsi\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624354 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-sys\") pod \"cinder-6ac23-backup-0\" (UID: 
\"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624387 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-lib-modules\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624440 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-combined-ca-bundle\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624595 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-run\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624794 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-machine-id\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624885 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-combined-ca-bundle\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " 
pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624924 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-scripts\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.624966 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-dev\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.625000 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.625050 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9lkq\" (UniqueName: \"kubernetes.io/projected/6fcfbc46-898e-4e7b-a96a-e2170c671a45-kube-api-access-l9lkq\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.625077 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxj29\" (UniqueName: \"kubernetes.io/projected/2ef0062c-3b15-4250-9b1f-bb0143fcd105-kube-api-access-nxj29\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 
02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.625222 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-machine-id\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.625246 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-run\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.626295 master-0 kubenswrapper[31411]: I0224 02:37:07.625668 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-dev\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.627478 master-0 kubenswrapper[31411]: I0224 02:37:07.627439 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data-custom\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.631705 master-0 kubenswrapper[31411]: I0224 02:37:07.631631 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-scripts\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.634220 master-0 kubenswrapper[31411]: I0224 02:37:07.634035 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.641653 master-0 kubenswrapper[31411]: I0224 02:37:07.640288 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-combined-ca-bundle\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.644250 master-0 kubenswrapper[31411]: I0224 02:37:07.643737 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9lkq\" (UniqueName: \"kubernetes.io/projected/6fcfbc46-898e-4e7b-a96a-e2170c671a45-kube-api-access-l9lkq\") pod \"cinder-6ac23-backup-0\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.714190 master-0 kubenswrapper[31411]: I0224 02:37:07.714111 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:07.732488 master-0 kubenswrapper[31411]: I0224 02:37:07.732427 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef0062c-3b15-4250-9b1f-bb0143fcd105-etc-machine-id\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.732700 master-0 kubenswrapper[31411]: I0224 02:37:07.732560 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.732700 master-0 kubenswrapper[31411]: I0224 02:37:07.732671 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-combined-ca-bundle\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.732700 master-0 kubenswrapper[31411]: I0224 02:37:07.732698 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-scripts\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.732808 master-0 kubenswrapper[31411]: I0224 02:37:07.732755 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxj29\" (UniqueName: \"kubernetes.io/projected/2ef0062c-3b15-4250-9b1f-bb0143fcd105-kube-api-access-nxj29\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.732808 
master-0 kubenswrapper[31411]: I0224 02:37:07.732786 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data-custom\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.732873 master-0 kubenswrapper[31411]: I0224 02:37:07.732812 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0062c-3b15-4250-9b1f-bb0143fcd105-logs\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.733483 master-0 kubenswrapper[31411]: I0224 02:37:07.733463 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0062c-3b15-4250-9b1f-bb0143fcd105-logs\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.733542 master-0 kubenswrapper[31411]: I0224 02:37:07.733529 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef0062c-3b15-4250-9b1f-bb0143fcd105-etc-machine-id\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.740520 master-0 kubenswrapper[31411]: I0224 02:37:07.740467 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.745282 master-0 kubenswrapper[31411]: I0224 02:37:07.743402 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-combined-ca-bundle\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.749553 master-0 kubenswrapper[31411]: I0224 02:37:07.749519 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-scripts\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.766601 master-0 kubenswrapper[31411]: I0224 02:37:07.764030 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data-custom\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.767777 master-0 kubenswrapper[31411]: I0224 02:37:07.767694 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxj29\" (UniqueName: \"kubernetes.io/projected/2ef0062c-3b15-4250-9b1f-bb0143fcd105-kube-api-access-nxj29\") pod \"cinder-6ac23-api-0\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:07.787117 master-0 kubenswrapper[31411]: I0224 02:37:07.786998 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:07.795439 master-0 kubenswrapper[31411]: I0224 02:37:07.793817 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:07.797419 master-0 kubenswrapper[31411]: I0224 02:37:07.797377 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:07.800779 master-0 kubenswrapper[31411]: I0224 02:37:07.800723 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:07.943928 master-0 kubenswrapper[31411]: I0224 02:37:07.942927 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-svc\") pod \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " Feb 24 02:37:07.943928 master-0 kubenswrapper[31411]: I0224 02:37:07.943011 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx7pm\" (UniqueName: \"kubernetes.io/projected/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-kube-api-access-fx7pm\") pod \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " Feb 24 02:37:07.943928 master-0 kubenswrapper[31411]: I0224 02:37:07.943131 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-sb\") pod \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " Feb 24 02:37:07.943928 master-0 kubenswrapper[31411]: I0224 02:37:07.943173 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-swift-storage-0\") pod \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " Feb 24 02:37:07.943928 master-0 kubenswrapper[31411]: I0224 02:37:07.943294 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-nb\") pod \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " Feb 24 02:37:07.943928 master-0 kubenswrapper[31411]: I0224 02:37:07.943322 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-config\") pod \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\" (UID: \"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466\") " Feb 24 02:37:07.951872 master-0 kubenswrapper[31411]: I0224 02:37:07.949810 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-kube-api-access-fx7pm" (OuterVolumeSpecName: "kube-api-access-fx7pm") pod "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" (UID: "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466"). InnerVolumeSpecName "kube-api-access-fx7pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:08.013227 master-0 kubenswrapper[31411]: I0224 02:37:08.013164 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" (UID: "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:08.020091 master-0 kubenswrapper[31411]: I0224 02:37:08.019901 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-config" (OuterVolumeSpecName: "config") pod "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" (UID: "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:08.038491 master-0 kubenswrapper[31411]: I0224 02:37:08.038389 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:08.048414 master-0 kubenswrapper[31411]: I0224 02:37:08.046384 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx7pm\" (UniqueName: \"kubernetes.io/projected/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-kube-api-access-fx7pm\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:08.048414 master-0 kubenswrapper[31411]: I0224 02:37:08.046415 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:08.048414 master-0 kubenswrapper[31411]: I0224 02:37:08.046426 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:08.049534 master-0 kubenswrapper[31411]: I0224 02:37:08.049500 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" (UID: "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:08.051598 master-0 kubenswrapper[31411]: I0224 02:37:08.051514 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" (UID: "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:08.059598 master-0 kubenswrapper[31411]: I0224 02:37:08.059548 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" (UID: "c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:08.151544 master-0 kubenswrapper[31411]: I0224 02:37:08.151487 31411 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:08.151544 master-0 kubenswrapper[31411]: I0224 02:37:08.151530 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:08.151544 master-0 kubenswrapper[31411]: I0224 02:37:08.151541 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:08.311970 master-0 kubenswrapper[31411]: I0224 02:37:08.311893 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-scheduler-0"] Feb 24 02:37:08.338679 master-0 kubenswrapper[31411]: I0224 02:37:08.338613 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-scheduler-0" event={"ID":"8657494d-8b2e-4545-9713-e55ace12f329","Type":"ContainerStarted","Data":"88ff67047f2a86ba7b082e089a5ee6ec71ff924c9922bd408b5c2f89646f327a"} Feb 24 02:37:08.341091 master-0 kubenswrapper[31411]: I0224 02:37:08.340710 31411 generic.go:334] "Generic (PLEG): container finished" 
podID="c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" containerID="fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4" exitCode=0 Feb 24 02:37:08.341840 master-0 kubenswrapper[31411]: I0224 02:37:08.341811 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" Feb 24 02:37:08.342659 master-0 kubenswrapper[31411]: I0224 02:37:08.342177 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" event={"ID":"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466","Type":"ContainerDied","Data":"fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4"} Feb 24 02:37:08.342659 master-0 kubenswrapper[31411]: I0224 02:37:08.342275 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674dc645f-b7fhr" event={"ID":"c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466","Type":"ContainerDied","Data":"3c7f59950bda322bec1848c87ab1da878bfa745ba8749b4b3d301f9ec1f26832"} Feb 24 02:37:08.342659 master-0 kubenswrapper[31411]: I0224 02:37:08.342306 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b46dbc6bf-ngrn9" Feb 24 02:37:08.342659 master-0 kubenswrapper[31411]: I0224 02:37:08.342359 31411 scope.go:117] "RemoveContainer" containerID="fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4" Feb 24 02:37:08.436310 master-0 kubenswrapper[31411]: I0224 02:37:08.436204 31411 scope.go:117] "RemoveContainer" containerID="c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805" Feb 24 02:37:08.441165 master-0 kubenswrapper[31411]: I0224 02:37:08.441132 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674dc645f-b7fhr"] Feb 24 02:37:08.451257 master-0 kubenswrapper[31411]: I0224 02:37:08.451175 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-674dc645f-b7fhr"] Feb 24 02:37:08.484133 master-0 kubenswrapper[31411]: I0224 02:37:08.483115 31411 
scope.go:117] "RemoveContainer" containerID="fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4" Feb 24 02:37:08.484265 master-0 kubenswrapper[31411]: E0224 02:37:08.484236 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4\": container with ID starting with fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4 not found: ID does not exist" containerID="fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4" Feb 24 02:37:08.484337 master-0 kubenswrapper[31411]: I0224 02:37:08.484272 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4"} err="failed to get container status \"fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4\": rpc error: code = NotFound desc = could not find container \"fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4\": container with ID starting with fc7d34bcc5819e0dd7298b84b8d60657f7b1b044bceab5b74c5810e6f58a1aa4 not found: ID does not exist" Feb 24 02:37:08.484337 master-0 kubenswrapper[31411]: I0224 02:37:08.484303 31411 scope.go:117] "RemoveContainer" containerID="c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805" Feb 24 02:37:08.484837 master-0 kubenswrapper[31411]: E0224 02:37:08.484786 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805\": container with ID starting with c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805 not found: ID does not exist" containerID="c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805" Feb 24 02:37:08.484837 master-0 kubenswrapper[31411]: I0224 02:37:08.484815 31411 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805"} err="failed to get container status \"c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805\": rpc error: code = NotFound desc = could not find container \"c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805\": container with ID starting with c2da429b3d451ce3a683eafbcfb11de3e297fb2e487e5a598e4ce22f5b7e6805 not found: ID does not exist" Feb 24 02:37:08.581727 master-0 kubenswrapper[31411]: I0224 02:37:08.580926 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc5ccc685-kl2f6"] Feb 24 02:37:08.615209 master-0 kubenswrapper[31411]: I0224 02:37:08.614934 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-volume-lvm-iscsi-0"] Feb 24 02:37:08.696595 master-0 kubenswrapper[31411]: I0224 02:37:08.696149 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-backup-0"] Feb 24 02:37:08.725352 master-0 kubenswrapper[31411]: W0224 02:37:08.725296 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fcfbc46_898e_4e7b_a96a_e2170c671a45.slice/crio-e93546c2e2924092e4bb69db692e3fb7d1c245b5c49c3c0ba03dc725391e17b5 WatchSource:0}: Error finding container e93546c2e2924092e4bb69db692e3fb7d1c245b5c49c3c0ba03dc725391e17b5: Status 404 returned error can't find the container with id e93546c2e2924092e4bb69db692e3fb7d1c245b5c49c3c0ba03dc725391e17b5 Feb 24 02:37:08.892519 master-0 kubenswrapper[31411]: I0224 02:37:08.892436 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-api-0"] Feb 24 02:37:08.892649 master-0 kubenswrapper[31411]: W0224 02:37:08.892557 31411 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ef0062c_3b15_4250_9b1f_bb0143fcd105.slice/crio-f74f36271ada17055abbdf779803ab13c8e0825e9f080b71c62ee407cd5400fa WatchSource:0}: Error finding container f74f36271ada17055abbdf779803ab13c8e0825e9f080b71c62ee407cd5400fa: Status 404 returned error can't find the container with id f74f36271ada17055abbdf779803ab13c8e0825e9f080b71c62ee407cd5400fa Feb 24 02:37:09.143041 master-0 kubenswrapper[31411]: I0224 02:37:09.142990 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" path="/var/lib/kubelet/pods/c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466/volumes" Feb 24 02:37:09.369602 master-0 kubenswrapper[31411]: I0224 02:37:09.369421 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-api-0" event={"ID":"2ef0062c-3b15-4250-9b1f-bb0143fcd105","Type":"ContainerStarted","Data":"f74f36271ada17055abbdf779803ab13c8e0825e9f080b71c62ee407cd5400fa"} Feb 24 02:37:09.387947 master-0 kubenswrapper[31411]: I0224 02:37:09.384970 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" event={"ID":"8e200864-094f-471c-ba9d-863d14ff8404","Type":"ContainerStarted","Data":"ddc5b9b1cc70f148d1c39a1db204e8407808aa7b5be8bdaf88463f576851eaf1"} Feb 24 02:37:09.394223 master-0 kubenswrapper[31411]: I0224 02:37:09.394161 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-backup-0" event={"ID":"6fcfbc46-898e-4e7b-a96a-e2170c671a45","Type":"ContainerStarted","Data":"e93546c2e2924092e4bb69db692e3fb7d1c245b5c49c3c0ba03dc725391e17b5"} Feb 24 02:37:09.396526 master-0 kubenswrapper[31411]: I0224 02:37:09.396489 31411 generic.go:334] "Generic (PLEG): container finished" podID="dcfea485-a4c1-4ed9-bfb8-2853157da83a" containerID="18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c" exitCode=0 Feb 24 02:37:09.398406 master-0 kubenswrapper[31411]: I0224 02:37:09.398377 
31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" event={"ID":"dcfea485-a4c1-4ed9-bfb8-2853157da83a","Type":"ContainerDied","Data":"18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c"} Feb 24 02:37:09.398406 master-0 kubenswrapper[31411]: I0224 02:37:09.398406 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" event={"ID":"dcfea485-a4c1-4ed9-bfb8-2853157da83a","Type":"ContainerStarted","Data":"68968d69c11ab1acd6ee17e014b91aae3a54d889df74742beae9b47c2a841e66"} Feb 24 02:37:09.731760 master-0 kubenswrapper[31411]: I0224 02:37:09.731703 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ac23-api-0"] Feb 24 02:37:10.414964 master-0 kubenswrapper[31411]: I0224 02:37:10.414883 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" event={"ID":"8e200864-094f-471c-ba9d-863d14ff8404","Type":"ContainerStarted","Data":"46b3e1401d4270531bf61719927a43f66a0d2786e4d9941333278da6d73172de"} Feb 24 02:37:10.418393 master-0 kubenswrapper[31411]: I0224 02:37:10.417424 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-backup-0" event={"ID":"6fcfbc46-898e-4e7b-a96a-e2170c671a45","Type":"ContainerStarted","Data":"bb53df19c53cbee60d75cdef9ec6e572433bc2a787797618d3d964e238935c5a"} Feb 24 02:37:10.421035 master-0 kubenswrapper[31411]: I0224 02:37:10.420994 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-scheduler-0" event={"ID":"8657494d-8b2e-4545-9713-e55ace12f329","Type":"ContainerStarted","Data":"eeb78254bf7f2baaf7618d1c8289888c1d4aedf8e156f196588221c7066ca527"} Feb 24 02:37:10.425048 master-0 kubenswrapper[31411]: I0224 02:37:10.424996 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" 
event={"ID":"dcfea485-a4c1-4ed9-bfb8-2853157da83a","Type":"ContainerStarted","Data":"2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50"} Feb 24 02:37:10.425217 master-0 kubenswrapper[31411]: I0224 02:37:10.425180 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:10.430925 master-0 kubenswrapper[31411]: I0224 02:37:10.430830 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-api-0" event={"ID":"2ef0062c-3b15-4250-9b1f-bb0143fcd105","Type":"ContainerStarted","Data":"6cdec3934af4e34ee4975c9e5118c80f5e4af1435142981792a3527895d9d943"} Feb 24 02:37:10.475322 master-0 kubenswrapper[31411]: I0224 02:37:10.475236 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" podStartSLOduration=3.47521097 podStartE2EDuration="3.47521097s" podCreationTimestamp="2026-02-24 02:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:10.467431912 +0000 UTC m=+973.684629758" watchObservedRunningTime="2026-02-24 02:37:10.47521097 +0000 UTC m=+973.692408816" Feb 24 02:37:11.475154 master-0 kubenswrapper[31411]: I0224 02:37:11.475047 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-scheduler-0" event={"ID":"8657494d-8b2e-4545-9713-e55ace12f329","Type":"ContainerStarted","Data":"86a3152df7e3a2c46f55c0278d2eb568f064bdbae35cf4046fa2c6c2d3131640"} Feb 24 02:37:11.483109 master-0 kubenswrapper[31411]: I0224 02:37:11.483029 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-api-0" event={"ID":"2ef0062c-3b15-4250-9b1f-bb0143fcd105","Type":"ContainerStarted","Data":"9a9e66df9212f71e78ab3ccc9c8dcb0e8aa4cb35b5ded9f174d105d3405c6c1d"} Feb 24 02:37:11.483338 master-0 kubenswrapper[31411]: I0224 02:37:11.483255 31411 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/cinder-6ac23-api-0" podUID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerName="cinder-6ac23-api-log" containerID="cri-o://6cdec3934af4e34ee4975c9e5118c80f5e4af1435142981792a3527895d9d943" gracePeriod=30 Feb 24 02:37:11.483559 master-0 kubenswrapper[31411]: I0224 02:37:11.483499 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:11.483763 master-0 kubenswrapper[31411]: I0224 02:37:11.483676 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-6ac23-api-0" podUID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerName="cinder-api" containerID="cri-o://9a9e66df9212f71e78ab3ccc9c8dcb0e8aa4cb35b5ded9f174d105d3405c6c1d" gracePeriod=30 Feb 24 02:37:11.489442 master-0 kubenswrapper[31411]: I0224 02:37:11.489393 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" event={"ID":"8e200864-094f-471c-ba9d-863d14ff8404","Type":"ContainerStarted","Data":"d4320888b9dabb7b3f57c8dc04c0c7e4c1d12a141528226e6a521350a767e095"} Feb 24 02:37:11.504447 master-0 kubenswrapper[31411]: I0224 02:37:11.504350 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-backup-0" event={"ID":"6fcfbc46-898e-4e7b-a96a-e2170c671a45","Type":"ContainerStarted","Data":"a18767a10ccb87aefe58cb57822457b32145a2e700ed95684236232652104c0d"} Feb 24 02:37:11.547916 master-0 kubenswrapper[31411]: I0224 02:37:11.547794 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ac23-scheduler-0" podStartSLOduration=3.751986817 podStartE2EDuration="4.547760945s" podCreationTimestamp="2026-02-24 02:37:07 +0000 UTC" firstStartedPulling="2026-02-24 02:37:08.303829549 +0000 UTC m=+971.521027395" lastFinishedPulling="2026-02-24 02:37:09.099603677 +0000 UTC m=+972.316801523" observedRunningTime="2026-02-24 02:37:11.51082311 +0000 UTC 
m=+974.728020996" watchObservedRunningTime="2026-02-24 02:37:11.547760945 +0000 UTC m=+974.764958821" Feb 24 02:37:11.566609 master-0 kubenswrapper[31411]: I0224 02:37:11.566430 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ac23-backup-0" podStartSLOduration=3.434514318 podStartE2EDuration="4.566410858s" podCreationTimestamp="2026-02-24 02:37:07 +0000 UTC" firstStartedPulling="2026-02-24 02:37:08.728245597 +0000 UTC m=+971.945443433" lastFinishedPulling="2026-02-24 02:37:09.860142127 +0000 UTC m=+973.077339973" observedRunningTime="2026-02-24 02:37:11.542472217 +0000 UTC m=+974.759670073" watchObservedRunningTime="2026-02-24 02:37:11.566410858 +0000 UTC m=+974.783608714" Feb 24 02:37:11.600609 master-0 kubenswrapper[31411]: I0224 02:37:11.599474 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" podStartSLOduration=3.45385656 podStartE2EDuration="4.599448124s" podCreationTimestamp="2026-02-24 02:37:07 +0000 UTC" firstStartedPulling="2026-02-24 02:37:08.713120933 +0000 UTC m=+971.930318779" lastFinishedPulling="2026-02-24 02:37:09.858712497 +0000 UTC m=+973.075910343" observedRunningTime="2026-02-24 02:37:11.58788026 +0000 UTC m=+974.805078126" watchObservedRunningTime="2026-02-24 02:37:11.599448124 +0000 UTC m=+974.816645980" Feb 24 02:37:11.635595 master-0 kubenswrapper[31411]: I0224 02:37:11.635472 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ac23-api-0" podStartSLOduration=4.635443893 podStartE2EDuration="4.635443893s" podCreationTimestamp="2026-02-24 02:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:11.619418924 +0000 UTC m=+974.836616780" watchObservedRunningTime="2026-02-24 02:37:11.635443893 +0000 UTC m=+974.852641749" Feb 24 02:37:12.523871 master-0 kubenswrapper[31411]: I0224 
02:37:12.523796 31411 generic.go:334] "Generic (PLEG): container finished" podID="a3a705bf-9636-4410-a44a-6ff6907d4179" containerID="47500ce406e24c05c3cb656695bc67b302bb1c3089aed248cbc1372b1c351a3e" exitCode=0 Feb 24 02:37:12.524901 master-0 kubenswrapper[31411]: I0224 02:37:12.523913 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jzr8b" event={"ID":"a3a705bf-9636-4410-a44a-6ff6907d4179","Type":"ContainerDied","Data":"47500ce406e24c05c3cb656695bc67b302bb1c3089aed248cbc1372b1c351a3e"} Feb 24 02:37:12.528238 master-0 kubenswrapper[31411]: I0224 02:37:12.528180 31411 generic.go:334] "Generic (PLEG): container finished" podID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerID="9a9e66df9212f71e78ab3ccc9c8dcb0e8aa4cb35b5ded9f174d105d3405c6c1d" exitCode=0 Feb 24 02:37:12.528444 master-0 kubenswrapper[31411]: I0224 02:37:12.528415 31411 generic.go:334] "Generic (PLEG): container finished" podID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerID="6cdec3934af4e34ee4975c9e5118c80f5e4af1435142981792a3527895d9d943" exitCode=143 Feb 24 02:37:12.528757 master-0 kubenswrapper[31411]: I0224 02:37:12.528263 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-api-0" event={"ID":"2ef0062c-3b15-4250-9b1f-bb0143fcd105","Type":"ContainerDied","Data":"9a9e66df9212f71e78ab3ccc9c8dcb0e8aa4cb35b5ded9f174d105d3405c6c1d"} Feb 24 02:37:12.528882 master-0 kubenswrapper[31411]: I0224 02:37:12.528821 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-api-0" event={"ID":"2ef0062c-3b15-4250-9b1f-bb0143fcd105","Type":"ContainerDied","Data":"6cdec3934af4e34ee4975c9e5118c80f5e4af1435142981792a3527895d9d943"} Feb 24 02:37:12.528882 master-0 kubenswrapper[31411]: I0224 02:37:12.528874 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-api-0" 
event={"ID":"2ef0062c-3b15-4250-9b1f-bb0143fcd105","Type":"ContainerDied","Data":"f74f36271ada17055abbdf779803ab13c8e0825e9f080b71c62ee407cd5400fa"} Feb 24 02:37:12.529017 master-0 kubenswrapper[31411]: I0224 02:37:12.528897 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74f36271ada17055abbdf779803ab13c8e0825e9f080b71c62ee407cd5400fa" Feb 24 02:37:12.590513 master-0 kubenswrapper[31411]: I0224 02:37:12.590430 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:12.715870 master-0 kubenswrapper[31411]: I0224 02:37:12.715791 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:12.753665 master-0 kubenswrapper[31411]: I0224 02:37:12.753480 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data-custom\") pod \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " Feb 24 02:37:12.753665 master-0 kubenswrapper[31411]: I0224 02:37:12.753629 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef0062c-3b15-4250-9b1f-bb0143fcd105-etc-machine-id\") pod \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " Feb 24 02:37:12.753939 master-0 kubenswrapper[31411]: I0224 02:37:12.753823 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data\") pod \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") " Feb 24 02:37:12.754029 master-0 kubenswrapper[31411]: I0224 02:37:12.753951 31411 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-scripts\") pod \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") "
Feb 24 02:37:12.754168 master-0 kubenswrapper[31411]: I0224 02:37:12.754135 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0062c-3b15-4250-9b1f-bb0143fcd105-logs\") pod \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") "
Feb 24 02:37:12.754421 master-0 kubenswrapper[31411]: I0224 02:37:12.754361 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ef0062c-3b15-4250-9b1f-bb0143fcd105-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2ef0062c-3b15-4250-9b1f-bb0143fcd105" (UID: "2ef0062c-3b15-4250-9b1f-bb0143fcd105"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 02:37:12.755198 master-0 kubenswrapper[31411]: I0224 02:37:12.755150 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ef0062c-3b15-4250-9b1f-bb0143fcd105-logs" (OuterVolumeSpecName: "logs") pod "2ef0062c-3b15-4250-9b1f-bb0143fcd105" (UID: "2ef0062c-3b15-4250-9b1f-bb0143fcd105"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:37:12.755198 master-0 kubenswrapper[31411]: I0224 02:37:12.755170 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxj29\" (UniqueName: \"kubernetes.io/projected/2ef0062c-3b15-4250-9b1f-bb0143fcd105-kube-api-access-nxj29\") pod \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") "
Feb 24 02:37:12.755463 master-0 kubenswrapper[31411]: I0224 02:37:12.755415 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-combined-ca-bundle\") pod \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\" (UID: \"2ef0062c-3b15-4250-9b1f-bb0143fcd105\") "
Feb 24 02:37:12.757218 master-0 kubenswrapper[31411]: I0224 02:37:12.757168 31411 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef0062c-3b15-4250-9b1f-bb0143fcd105-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:12.757218 master-0 kubenswrapper[31411]: I0224 02:37:12.757204 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef0062c-3b15-4250-9b1f-bb0143fcd105-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:12.760953 master-0 kubenswrapper[31411]: I0224 02:37:12.760871 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef0062c-3b15-4250-9b1f-bb0143fcd105-kube-api-access-nxj29" (OuterVolumeSpecName: "kube-api-access-nxj29") pod "2ef0062c-3b15-4250-9b1f-bb0143fcd105" (UID: "2ef0062c-3b15-4250-9b1f-bb0143fcd105"). InnerVolumeSpecName "kube-api-access-nxj29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:37:12.762291 master-0 kubenswrapper[31411]: I0224 02:37:12.762247 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-scripts" (OuterVolumeSpecName: "scripts") pod "2ef0062c-3b15-4250-9b1f-bb0143fcd105" (UID: "2ef0062c-3b15-4250-9b1f-bb0143fcd105"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:12.763208 master-0 kubenswrapper[31411]: I0224 02:37:12.762773 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2ef0062c-3b15-4250-9b1f-bb0143fcd105" (UID: "2ef0062c-3b15-4250-9b1f-bb0143fcd105"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:12.798236 master-0 kubenswrapper[31411]: I0224 02:37:12.798102 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0"
Feb 24 02:37:12.798599 master-0 kubenswrapper[31411]: I0224 02:37:12.798529 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ef0062c-3b15-4250-9b1f-bb0143fcd105" (UID: "2ef0062c-3b15-4250-9b1f-bb0143fcd105"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:12.802096 master-0 kubenswrapper[31411]: I0224 02:37:12.801831 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-6ac23-backup-0"
Feb 24 02:37:12.828714 master-0 kubenswrapper[31411]: I0224 02:37:12.828119 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data" (OuterVolumeSpecName: "config-data") pod "2ef0062c-3b15-4250-9b1f-bb0143fcd105" (UID: "2ef0062c-3b15-4250-9b1f-bb0143fcd105"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:12.860750 master-0 kubenswrapper[31411]: I0224 02:37:12.860619 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxj29\" (UniqueName: \"kubernetes.io/projected/2ef0062c-3b15-4250-9b1f-bb0143fcd105-kube-api-access-nxj29\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:12.860750 master-0 kubenswrapper[31411]: I0224 02:37:12.860695 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:12.860750 master-0 kubenswrapper[31411]: I0224 02:37:12.860706 31411 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:12.860750 master-0 kubenswrapper[31411]: I0224 02:37:12.860718 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:12.860750 master-0 kubenswrapper[31411]: I0224 02:37:12.860729 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef0062c-3b15-4250-9b1f-bb0143fcd105-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:13.548230 master-0 kubenswrapper[31411]: I0224 02:37:13.548121 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.643616 master-0 kubenswrapper[31411]: I0224 02:37:13.643495 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ac23-api-0"]
Feb 24 02:37:13.668452 master-0 kubenswrapper[31411]: I0224 02:37:13.667289 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6ac23-api-0"]
Feb 24 02:37:13.677038 master-0 kubenswrapper[31411]: I0224 02:37:13.676954 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ac23-api-0"]
Feb 24 02:37:13.678005 master-0 kubenswrapper[31411]: E0224 02:37:13.677953 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerName="cinder-api"
Feb 24 02:37:13.678005 master-0 kubenswrapper[31411]: I0224 02:37:13.677994 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerName="cinder-api"
Feb 24 02:37:13.678244 master-0 kubenswrapper[31411]: E0224 02:37:13.678048 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" containerName="dnsmasq-dns"
Feb 24 02:37:13.678244 master-0 kubenswrapper[31411]: I0224 02:37:13.678062 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" containerName="dnsmasq-dns"
Feb 24 02:37:13.678244 master-0 kubenswrapper[31411]: E0224 02:37:13.678150 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerName="cinder-6ac23-api-log"
Feb 24 02:37:13.678244 master-0 kubenswrapper[31411]: I0224 02:37:13.678165 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerName="cinder-6ac23-api-log"
Feb 24 02:37:13.678244 master-0 kubenswrapper[31411]: E0224 02:37:13.678226 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" containerName="init"
Feb 24 02:37:13.678244 master-0 kubenswrapper[31411]: I0224 02:37:13.678239 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" containerName="init"
Feb 24 02:37:13.678755 master-0 kubenswrapper[31411]: I0224 02:37:13.678703 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerName="cinder-api"
Feb 24 02:37:13.678879 master-0 kubenswrapper[31411]: I0224 02:37:13.678761 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" containerName="cinder-6ac23-api-log"
Feb 24 02:37:13.678879 master-0 kubenswrapper[31411]: I0224 02:37:13.678844 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fddb55-1b2d-44e1-8ad9-0f2fb92bd466" containerName="dnsmasq-dns"
Feb 24 02:37:13.689305 master-0 kubenswrapper[31411]: I0224 02:37:13.681680 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.689305 master-0 kubenswrapper[31411]: I0224 02:37:13.685712 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-api-config-data"
Feb 24 02:37:13.689305 master-0 kubenswrapper[31411]: I0224 02:37:13.686082 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 24 02:37:13.689305 master-0 kubenswrapper[31411]: I0224 02:37:13.687751 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-api-0"]
Feb 24 02:37:13.689305 master-0 kubenswrapper[31411]: I0224 02:37:13.689269 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 24 02:37:13.796669 master-0 kubenswrapper[31411]: I0224 02:37:13.796550 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-config-data-custom\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.796984 master-0 kubenswrapper[31411]: I0224 02:37:13.796697 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-config-data\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.796984 master-0 kubenswrapper[31411]: I0224 02:37:13.796921 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-scripts\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.797136 master-0 kubenswrapper[31411]: I0224 02:37:13.797092 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-internal-tls-certs\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.797372 master-0 kubenswrapper[31411]: I0224 02:37:13.797285 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-combined-ca-bundle\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.797506 master-0 kubenswrapper[31411]: I0224 02:37:13.797378 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1097a77b-8299-4100-b3f4-d94348cf6578-logs\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.797506 master-0 kubenswrapper[31411]: I0224 02:37:13.797469 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-public-tls-certs\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.799293 master-0 kubenswrapper[31411]: I0224 02:37:13.797523 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1097a77b-8299-4100-b3f4-d94348cf6578-etc-machine-id\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.799293 master-0 kubenswrapper[31411]: I0224 02:37:13.797875 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nvmc\" (UniqueName: \"kubernetes.io/projected/1097a77b-8299-4100-b3f4-d94348cf6578-kube-api-access-5nvmc\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.901459 master-0 kubenswrapper[31411]: I0224 02:37:13.901372 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-config-data-custom\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.901928 master-0 kubenswrapper[31411]: I0224 02:37:13.901498 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-config-data\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.907035 master-0 kubenswrapper[31411]: I0224 02:37:13.906944 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-scripts\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.907405 master-0 kubenswrapper[31411]: I0224 02:37:13.907121 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-internal-tls-certs\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.907519 master-0 kubenswrapper[31411]: I0224 02:37:13.907478 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-combined-ca-bundle\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.907779 master-0 kubenswrapper[31411]: I0224 02:37:13.907652 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1097a77b-8299-4100-b3f4-d94348cf6578-logs\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.907886 master-0 kubenswrapper[31411]: I0224 02:37:13.907848 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-public-tls-certs\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.907941 master-0 kubenswrapper[31411]: I0224 02:37:13.907920 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1097a77b-8299-4100-b3f4-d94348cf6578-etc-machine-id\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.908228 master-0 kubenswrapper[31411]: I0224 02:37:13.908123 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nvmc\" (UniqueName: \"kubernetes.io/projected/1097a77b-8299-4100-b3f4-d94348cf6578-kube-api-access-5nvmc\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.908670 master-0 kubenswrapper[31411]: I0224 02:37:13.908614 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-config-data-custom\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.911075 master-0 kubenswrapper[31411]: I0224 02:37:13.910170 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-config-data\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.911075 master-0 kubenswrapper[31411]: I0224 02:37:13.910253 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1097a77b-8299-4100-b3f4-d94348cf6578-etc-machine-id\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.911075 master-0 kubenswrapper[31411]: I0224 02:37:13.910608 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1097a77b-8299-4100-b3f4-d94348cf6578-logs\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.923093 master-0 kubenswrapper[31411]: I0224 02:37:13.919860 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-public-tls-certs\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.923093 master-0 kubenswrapper[31411]: I0224 02:37:13.919874 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-scripts\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.923093 master-0 kubenswrapper[31411]: I0224 02:37:13.922719 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-combined-ca-bundle\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.927714 master-0 kubenswrapper[31411]: I0224 02:37:13.927339 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1097a77b-8299-4100-b3f4-d94348cf6578-internal-tls-certs\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:13.932111 master-0 kubenswrapper[31411]: I0224 02:37:13.932068 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nvmc\" (UniqueName: \"kubernetes.io/projected/1097a77b-8299-4100-b3f4-d94348cf6578-kube-api-access-5nvmc\") pod \"cinder-6ac23-api-0\" (UID: \"1097a77b-8299-4100-b3f4-d94348cf6578\") " pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:14.046707 master-0 kubenswrapper[31411]: I0224 02:37:14.040972 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-api-0"
Feb 24 02:37:14.220308 master-0 kubenswrapper[31411]: I0224 02:37:14.219712 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:37:14.336031 master-0 kubenswrapper[31411]: I0224 02:37:14.335907 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q8vp\" (UniqueName: \"kubernetes.io/projected/a3a705bf-9636-4410-a44a-6ff6907d4179-kube-api-access-6q8vp\") pod \"a3a705bf-9636-4410-a44a-6ff6907d4179\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") "
Feb 24 02:37:14.336401 master-0 kubenswrapper[31411]: I0224 02:37:14.336336 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-combined-ca-bundle\") pod \"a3a705bf-9636-4410-a44a-6ff6907d4179\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") "
Feb 24 02:37:14.336707 master-0 kubenswrapper[31411]: I0224 02:37:14.336619 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data\") pod \"a3a705bf-9636-4410-a44a-6ff6907d4179\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") "
Feb 24 02:37:14.336782 master-0 kubenswrapper[31411]: I0224 02:37:14.336730 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data-merged\") pod \"a3a705bf-9636-4410-a44a-6ff6907d4179\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") "
Feb 24 02:37:14.336980 master-0 kubenswrapper[31411]: I0224 02:37:14.336941 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/a3a705bf-9636-4410-a44a-6ff6907d4179-etc-podinfo\") pod \"a3a705bf-9636-4410-a44a-6ff6907d4179\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") "
Feb 24 02:37:14.337041 master-0 kubenswrapper[31411]: I0224 02:37:14.337009 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-scripts\") pod \"a3a705bf-9636-4410-a44a-6ff6907d4179\" (UID: \"a3a705bf-9636-4410-a44a-6ff6907d4179\") "
Feb 24 02:37:14.342916 master-0 kubenswrapper[31411]: I0224 02:37:14.341729 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "a3a705bf-9636-4410-a44a-6ff6907d4179" (UID: "a3a705bf-9636-4410-a44a-6ff6907d4179"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:37:14.348391 master-0 kubenswrapper[31411]: I0224 02:37:14.348154 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-scripts" (OuterVolumeSpecName: "scripts") pod "a3a705bf-9636-4410-a44a-6ff6907d4179" (UID: "a3a705bf-9636-4410-a44a-6ff6907d4179"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:14.348391 master-0 kubenswrapper[31411]: I0224 02:37:14.348179 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a3a705bf-9636-4410-a44a-6ff6907d4179-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "a3a705bf-9636-4410-a44a-6ff6907d4179" (UID: "a3a705bf-9636-4410-a44a-6ff6907d4179"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 24 02:37:14.361996 master-0 kubenswrapper[31411]: I0224 02:37:14.361638 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3a705bf-9636-4410-a44a-6ff6907d4179-kube-api-access-6q8vp" (OuterVolumeSpecName: "kube-api-access-6q8vp") pod "a3a705bf-9636-4410-a44a-6ff6907d4179" (UID: "a3a705bf-9636-4410-a44a-6ff6907d4179"). InnerVolumeSpecName "kube-api-access-6q8vp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:37:14.400993 master-0 kubenswrapper[31411]: I0224 02:37:14.400907 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data" (OuterVolumeSpecName: "config-data") pod "a3a705bf-9636-4410-a44a-6ff6907d4179" (UID: "a3a705bf-9636-4410-a44a-6ff6907d4179"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:14.417731 master-0 kubenswrapper[31411]: I0224 02:37:14.417524 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3a705bf-9636-4410-a44a-6ff6907d4179" (UID: "a3a705bf-9636-4410-a44a-6ff6907d4179"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:14.444080 master-0 kubenswrapper[31411]: I0224 02:37:14.444018 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q8vp\" (UniqueName: \"kubernetes.io/projected/a3a705bf-9636-4410-a44a-6ff6907d4179-kube-api-access-6q8vp\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:14.444080 master-0 kubenswrapper[31411]: I0224 02:37:14.444081 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:14.444290 master-0 kubenswrapper[31411]: I0224 02:37:14.444108 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:14.444290 master-0 kubenswrapper[31411]: I0224 02:37:14.444128 31411 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a3a705bf-9636-4410-a44a-6ff6907d4179-config-data-merged\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:14.444290 master-0 kubenswrapper[31411]: I0224 02:37:14.444149 31411 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/a3a705bf-9636-4410-a44a-6ff6907d4179-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:14.444290 master-0 kubenswrapper[31411]: I0224 02:37:14.444168 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a705bf-9636-4410-a44a-6ff6907d4179-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:14.564427 master-0 kubenswrapper[31411]: I0224 02:37:14.564366 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jzr8b" event={"ID":"a3a705bf-9636-4410-a44a-6ff6907d4179","Type":"ContainerDied","Data":"4132648f2993f7a672d22ddb371ede07a50dc3610cf1142ca93b21a22ec87fb2"}
Feb 24 02:37:14.564427 master-0 kubenswrapper[31411]: I0224 02:37:14.564420 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4132648f2993f7a672d22ddb371ede07a50dc3610cf1142ca93b21a22ec87fb2"
Feb 24 02:37:14.565189 master-0 kubenswrapper[31411]: I0224 02:37:14.564452 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jzr8b"
Feb 24 02:37:14.679974 master-0 kubenswrapper[31411]: W0224 02:37:14.679921 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1097a77b_8299_4100_b3f4_d94348cf6578.slice/crio-38ef615f1fbe260dd0ee671bf49cafe74b077e63eeeaf1447b8cccf3c9668cb1 WatchSource:0}: Error finding container 38ef615f1fbe260dd0ee671bf49cafe74b077e63eeeaf1447b8cccf3c9668cb1: Status 404 returned error can't find the container with id 38ef615f1fbe260dd0ee671bf49cafe74b077e63eeeaf1447b8cccf3c9668cb1
Feb 24 02:37:14.688432 master-0 kubenswrapper[31411]: I0224 02:37:14.688358 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-api-0"]
Feb 24 02:37:15.134606 master-0 kubenswrapper[31411]: I0224 02:37:15.131778 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef0062c-3b15-4250-9b1f-bb0143fcd105" path="/var/lib/kubelet/pods/2ef0062c-3b15-4250-9b1f-bb0143fcd105/volumes"
Feb 24 02:37:15.134606 master-0 kubenswrapper[31411]: I0224 02:37:15.132863 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-8kz9s"]
Feb 24 02:37:15.134606 master-0 kubenswrapper[31411]: E0224 02:37:15.133385 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a705bf-9636-4410-a44a-6ff6907d4179" containerName="ironic-db-sync"
Feb 24 02:37:15.134606 master-0 kubenswrapper[31411]: I0224 02:37:15.133403 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a705bf-9636-4410-a44a-6ff6907d4179" containerName="ironic-db-sync"
Feb 24 02:37:15.134606 master-0 kubenswrapper[31411]: E0224 02:37:15.133500 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a705bf-9636-4410-a44a-6ff6907d4179" containerName="init"
Feb 24 02:37:15.134606 master-0 kubenswrapper[31411]: I0224 02:37:15.133509 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a705bf-9636-4410-a44a-6ff6907d4179" containerName="init"
Feb 24 02:37:15.135123 master-0 kubenswrapper[31411]: I0224 02:37:15.135090 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a705bf-9636-4410-a44a-6ff6907d4179" containerName="ironic-db-sync"
Feb 24 02:37:15.136273 master-0 kubenswrapper[31411]: I0224 02:37:15.136246 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-8kz9s"
Feb 24 02:37:15.165593 master-0 kubenswrapper[31411]: I0224 02:37:15.163796 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-8kz9s"]
Feb 24 02:37:15.287605 master-0 kubenswrapper[31411]: I0224 02:37:15.287419 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjc6\" (UniqueName: \"kubernetes.io/projected/681e6026-865f-4f36-9ca6-5321c0738d18-kube-api-access-prjc6\") pod \"ironic-inspector-db-create-8kz9s\" (UID: \"681e6026-865f-4f36-9ca6-5321c0738d18\") " pod="openstack/ironic-inspector-db-create-8kz9s"
Feb 24 02:37:15.287605 master-0 kubenswrapper[31411]: I0224 02:37:15.287546 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681e6026-865f-4f36-9ca6-5321c0738d18-operator-scripts\") pod \"ironic-inspector-db-create-8kz9s\" (UID: \"681e6026-865f-4f36-9ca6-5321c0738d18\") " pod="openstack/ironic-inspector-db-create-8kz9s"
Feb 24 02:37:15.358611 master-0 kubenswrapper[31411]: I0224 02:37:15.357353 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"]
Feb 24 02:37:15.359285 master-0 kubenswrapper[31411]: I0224 02:37:15.359249 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"
Feb 24 02:37:15.366554 master-0 kubenswrapper[31411]: I0224 02:37:15.363592 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data"
Feb 24 02:37:15.397675 master-0 kubenswrapper[31411]: I0224 02:37:15.397235 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"]
Feb 24 02:37:15.403320 master-0 kubenswrapper[31411]: I0224 02:37:15.403256 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-2bdc-account-create-update-5cgdd"]
Feb 24 02:37:15.405441 master-0 kubenswrapper[31411]: I0224 02:37:15.405416 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd"
Feb 24 02:37:15.409545 master-0 kubenswrapper[31411]: I0224 02:37:15.409495 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prjc6\" (UniqueName: \"kubernetes.io/projected/681e6026-865f-4f36-9ca6-5321c0738d18-kube-api-access-prjc6\") pod \"ironic-inspector-db-create-8kz9s\" (UID: \"681e6026-865f-4f36-9ca6-5321c0738d18\") " pod="openstack/ironic-inspector-db-create-8kz9s"
Feb 24 02:37:15.409650 master-0 kubenswrapper[31411]: I0224 02:37:15.409620 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681e6026-865f-4f36-9ca6-5321c0738d18-operator-scripts\") pod \"ironic-inspector-db-create-8kz9s\" (UID: \"681e6026-865f-4f36-9ca6-5321c0738d18\") " pod="openstack/ironic-inspector-db-create-8kz9s"
Feb 24 02:37:15.410508 master-0 kubenswrapper[31411]: I0224 02:37:15.410480 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681e6026-865f-4f36-9ca6-5321c0738d18-operator-scripts\") pod \"ironic-inspector-db-create-8kz9s\" (UID: \"681e6026-865f-4f36-9ca6-5321c0738d18\") " pod="openstack/ironic-inspector-db-create-8kz9s"
Feb 24 02:37:15.419008 master-0 kubenswrapper[31411]: I0224 02:37:15.418944 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret"
Feb 24 02:37:15.438514 master-0 kubenswrapper[31411]: I0224 02:37:15.433399 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-2bdc-account-create-update-5cgdd"]
Feb 24 02:37:15.564807 master-0 kubenswrapper[31411]: I0224 02:37:15.562041 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prjc6\" (UniqueName: \"kubernetes.io/projected/681e6026-865f-4f36-9ca6-5321c0738d18-kube-api-access-prjc6\") pod \"ironic-inspector-db-create-8kz9s\" (UID: \"681e6026-865f-4f36-9ca6-5321c0738d18\") " pod="openstack/ironic-inspector-db-create-8kz9s"
Feb 24 02:37:15.601371 master-0 kubenswrapper[31411]: I0224 02:37:15.600518 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-api-0" event={"ID":"1097a77b-8299-4100-b3f4-d94348cf6578","Type":"ContainerStarted","Data":"38ef615f1fbe260dd0ee671bf49cafe74b077e63eeeaf1447b8cccf3c9668cb1"}
Feb 24 02:37:15.675899 master-0 kubenswrapper[31411]: I0224 02:37:15.675821 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72106e8c-2a98-4a82-9f36-c820986c5665-combined-ca-bundle\") pod \"ironic-neutron-agent-7d8f6784f6-dqjdm\" (UID: \"72106e8c-2a98-4a82-9f36-c820986c5665\") " pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"
Feb 24 02:37:15.675899 master-0 kubenswrapper[31411]: I0224 02:37:15.675894 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppxr4\" (UniqueName: \"kubernetes.io/projected/72106e8c-2a98-4a82-9f36-c820986c5665-kube-api-access-ppxr4\") pod \"ironic-neutron-agent-7d8f6784f6-dqjdm\" (UID: \"72106e8c-2a98-4a82-9f36-c820986c5665\") " pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"
Feb 24 02:37:15.676251 master-0 kubenswrapper[31411]: I0224 02:37:15.676118 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zjj\" (UniqueName: \"kubernetes.io/projected/df4cb99b-a2a5-4423-ab60-5d180ea09e93-kube-api-access-c6zjj\") pod \"ironic-inspector-2bdc-account-create-update-5cgdd\" (UID: \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\") " pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd"
Feb 24 02:37:15.676251 master-0 kubenswrapper[31411]: I0224 02:37:15.676218 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/72106e8c-2a98-4a82-9f36-c820986c5665-config\") pod \"ironic-neutron-agent-7d8f6784f6-dqjdm\" (UID: \"72106e8c-2a98-4a82-9f36-c820986c5665\") " pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"
Feb 24 02:37:15.676349 master-0 kubenswrapper[31411]: I0224 02:37:15.676326 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df4cb99b-a2a5-4423-ab60-5d180ea09e93-operator-scripts\") pod \"ironic-inspector-2bdc-account-create-update-5cgdd\" (UID: \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\") " pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd"
Feb 24 02:37:15.720747 master-0 kubenswrapper[31411]: I0224 02:37:15.720667 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc5ccc685-kl2f6"]
Feb 24 02:37:15.721108 master-0 kubenswrapper[31411]: I0224 02:37:15.721044 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" podUID="dcfea485-a4c1-4ed9-bfb8-2853157da83a" containerName="dnsmasq-dns" containerID="cri-o://2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50" gracePeriod=10
Feb 24 02:37:15.726009 master-0 kubenswrapper[31411]: I0224 02:37:15.725954 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6"
Feb 24 02:37:15.760954 master-0 kubenswrapper[31411]: I0224 02:37:15.752406 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b45666449-v77b5"]
Feb 24 02:37:15.760954 master-0 kubenswrapper[31411]: I0224 02:37:15.757190 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b45666449-v77b5"
Feb 24 02:37:15.774437 master-0 kubenswrapper[31411]: I0224 02:37:15.773959 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b45666449-v77b5"]
Feb 24 02:37:15.781287 master-0 kubenswrapper[31411]: I0224 02:37:15.778886 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6zjj\" (UniqueName: \"kubernetes.io/projected/df4cb99b-a2a5-4423-ab60-5d180ea09e93-kube-api-access-c6zjj\") pod \"ironic-inspector-2bdc-account-create-update-5cgdd\" (UID: \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\") " pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd"
Feb 24 02:37:15.781287 master-0 kubenswrapper[31411]: I0224 02:37:15.778966 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/72106e8c-2a98-4a82-9f36-c820986c5665-config\") pod \"ironic-neutron-agent-7d8f6784f6-dqjdm\" (UID: \"72106e8c-2a98-4a82-9f36-c820986c5665\") " pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"
Feb 24 02:37:15.781287 master-0 kubenswrapper[31411]: I0224 02:37:15.779021 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/df4cb99b-a2a5-4423-ab60-5d180ea09e93-operator-scripts\") pod \"ironic-inspector-2bdc-account-create-update-5cgdd\" (UID: \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\") " pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" Feb 24 02:37:15.781287 master-0 kubenswrapper[31411]: I0224 02:37:15.779223 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72106e8c-2a98-4a82-9f36-c820986c5665-combined-ca-bundle\") pod \"ironic-neutron-agent-7d8f6784f6-dqjdm\" (UID: \"72106e8c-2a98-4a82-9f36-c820986c5665\") " pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" Feb 24 02:37:15.781287 master-0 kubenswrapper[31411]: I0224 02:37:15.779246 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppxr4\" (UniqueName: \"kubernetes.io/projected/72106e8c-2a98-4a82-9f36-c820986c5665-kube-api-access-ppxr4\") pod \"ironic-neutron-agent-7d8f6784f6-dqjdm\" (UID: \"72106e8c-2a98-4a82-9f36-c820986c5665\") " pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" Feb 24 02:37:15.785223 master-0 kubenswrapper[31411]: I0224 02:37:15.784792 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df4cb99b-a2a5-4423-ab60-5d180ea09e93-operator-scripts\") pod \"ironic-inspector-2bdc-account-create-update-5cgdd\" (UID: \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\") " pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" Feb 24 02:37:15.790094 master-0 kubenswrapper[31411]: I0224 02:37:15.788170 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/72106e8c-2a98-4a82-9f36-c820986c5665-config\") pod \"ironic-neutron-agent-7d8f6784f6-dqjdm\" (UID: \"72106e8c-2a98-4a82-9f36-c820986c5665\") " pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" Feb 24 02:37:15.820676 master-0 
kubenswrapper[31411]: I0224 02:37:15.820425 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-8kz9s" Feb 24 02:37:15.828622 master-0 kubenswrapper[31411]: I0224 02:37:15.822330 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72106e8c-2a98-4a82-9f36-c820986c5665-combined-ca-bundle\") pod \"ironic-neutron-agent-7d8f6784f6-dqjdm\" (UID: \"72106e8c-2a98-4a82-9f36-c820986c5665\") " pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" Feb 24 02:37:15.828622 master-0 kubenswrapper[31411]: I0224 02:37:15.825103 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppxr4\" (UniqueName: \"kubernetes.io/projected/72106e8c-2a98-4a82-9f36-c820986c5665-kube-api-access-ppxr4\") pod \"ironic-neutron-agent-7d8f6784f6-dqjdm\" (UID: \"72106e8c-2a98-4a82-9f36-c820986c5665\") " pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" Feb 24 02:37:15.828622 master-0 kubenswrapper[31411]: I0224 02:37:15.827771 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6zjj\" (UniqueName: \"kubernetes.io/projected/df4cb99b-a2a5-4423-ab60-5d180ea09e93-kube-api-access-c6zjj\") pod \"ironic-inspector-2bdc-account-create-update-5cgdd\" (UID: \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\") " pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" Feb 24 02:37:15.843297 master-0 kubenswrapper[31411]: I0224 02:37:15.841588 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-75c678c459-9mmbb"] Feb 24 02:37:15.862158 master-0 kubenswrapper[31411]: I0224 02:37:15.862101 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:15.871481 master-0 kubenswrapper[31411]: I0224 02:37:15.871023 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Feb 24 02:37:15.871481 master-0 kubenswrapper[31411]: I0224 02:37:15.871083 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 24 02:37:15.871481 master-0 kubenswrapper[31411]: I0224 02:37:15.871404 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 24 02:37:15.873201 master-0 kubenswrapper[31411]: I0224 02:37:15.873173 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Feb 24 02:37:15.873359 master-0 kubenswrapper[31411]: I0224 02:37:15.873341 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Feb 24 02:37:15.881401 master-0 kubenswrapper[31411]: I0224 02:37:15.881348 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87vct\" (UniqueName: \"kubernetes.io/projected/8bdcc60c-f7fb-43df-8b93-035a0796383f-kube-api-access-87vct\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.881483 master-0 kubenswrapper[31411]: I0224 02:37:15.881428 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-sb\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.881531 master-0 kubenswrapper[31411]: I0224 02:37:15.881518 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-swift-storage-0\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.881586 master-0 kubenswrapper[31411]: I0224 02:37:15.881541 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-nb\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.881629 master-0 kubenswrapper[31411]: I0224 02:37:15.881594 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-config\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.881664 master-0 kubenswrapper[31411]: I0224 02:37:15.881628 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-svc\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.915537 master-0 kubenswrapper[31411]: I0224 02:37:15.915465 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-75c678c459-9mmbb"] Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983495 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-config\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: 
\"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983585 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-svc\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983631 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9ef8199f-6610-44a1-b85c-fc7f2bea6294-etc-podinfo\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983697 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7m95\" (UniqueName: \"kubernetes.io/projected/9ef8199f-6610-44a1-b85c-fc7f2bea6294-kube-api-access-z7m95\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983722 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-logs\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983745 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-custom\") 
pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983778 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-combined-ca-bundle\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983805 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87vct\" (UniqueName: \"kubernetes.io/projected/8bdcc60c-f7fb-43df-8b93-035a0796383f-kube-api-access-87vct\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983822 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-scripts\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983859 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-sb\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983915 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983958 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-swift-storage-0\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.983979 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-nb\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.984004 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-merged\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.985495 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-config\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.986064 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-svc\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.986936 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-sb\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.987484 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-swift-storage-0\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:15.989662 master-0 kubenswrapper[31411]: I0224 02:37:15.988015 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-nb\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:16.039358 master-0 kubenswrapper[31411]: I0224 02:37:16.039323 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87vct\" (UniqueName: \"kubernetes.io/projected/8bdcc60c-f7fb-43df-8b93-035a0796383f-kube-api-access-87vct\") pod \"dnsmasq-dns-6b45666449-v77b5\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:16.091198 master-0 kubenswrapper[31411]: I0224 02:37:16.091043 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-merged\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.091624 master-0 kubenswrapper[31411]: I0224 02:37:16.090200 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-merged\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.097764 master-0 kubenswrapper[31411]: I0224 02:37:16.091730 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9ef8199f-6610-44a1-b85c-fc7f2bea6294-etc-podinfo\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.097764 master-0 kubenswrapper[31411]: I0224 02:37:16.092588 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7m95\" (UniqueName: \"kubernetes.io/projected/9ef8199f-6610-44a1-b85c-fc7f2bea6294-kube-api-access-z7m95\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.097764 master-0 kubenswrapper[31411]: I0224 02:37:16.092625 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-logs\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.097764 master-0 kubenswrapper[31411]: I0224 02:37:16.092665 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-custom\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.097764 master-0 kubenswrapper[31411]: I0224 02:37:16.092731 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-combined-ca-bundle\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.097764 master-0 kubenswrapper[31411]: I0224 02:37:16.092770 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-scripts\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.097764 master-0 kubenswrapper[31411]: I0224 02:37:16.092931 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.097764 master-0 kubenswrapper[31411]: I0224 02:37:16.094055 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-logs\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.110079 master-0 kubenswrapper[31411]: I0224 02:37:16.110032 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-custom\") pod 
\"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.110861 master-0 kubenswrapper[31411]: I0224 02:37:16.110598 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-scripts\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.112630 master-0 kubenswrapper[31411]: I0224 02:37:16.112557 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.114194 master-0 kubenswrapper[31411]: I0224 02:37:16.113140 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9ef8199f-6610-44a1-b85c-fc7f2bea6294-etc-podinfo\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.117743 master-0 kubenswrapper[31411]: I0224 02:37:16.115688 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-combined-ca-bundle\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.120141 master-0 kubenswrapper[31411]: I0224 02:37:16.120110 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7m95\" (UniqueName: \"kubernetes.io/projected/9ef8199f-6610-44a1-b85c-fc7f2bea6294-kube-api-access-z7m95\") pod \"ironic-75c678c459-9mmbb\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " 
pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.235793 master-0 kubenswrapper[31411]: I0224 02:37:16.235323 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" Feb 24 02:37:16.282903 master-0 kubenswrapper[31411]: I0224 02:37:16.281363 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" Feb 24 02:37:16.348696 master-0 kubenswrapper[31411]: I0224 02:37:16.346590 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:16.393653 master-0 kubenswrapper[31411]: I0224 02:37:16.371084 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:16.571786 master-0 kubenswrapper[31411]: I0224 02:37:16.567216 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:16.583619 master-0 kubenswrapper[31411]: I0224 02:37:16.583520 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-8kz9s"] Feb 24 02:37:16.625045 master-0 kubenswrapper[31411]: I0224 02:37:16.625004 31411 generic.go:334] "Generic (PLEG): container finished" podID="dcfea485-a4c1-4ed9-bfb8-2853157da83a" containerID="2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50" exitCode=0 Feb 24 02:37:16.625285 master-0 kubenswrapper[31411]: I0224 02:37:16.625265 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" event={"ID":"dcfea485-a4c1-4ed9-bfb8-2853157da83a","Type":"ContainerDied","Data":"2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50"} Feb 24 02:37:16.625398 master-0 kubenswrapper[31411]: I0224 02:37:16.625382 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" event={"ID":"dcfea485-a4c1-4ed9-bfb8-2853157da83a","Type":"ContainerDied","Data":"68968d69c11ab1acd6ee17e014b91aae3a54d889df74742beae9b47c2a841e66"} Feb 24 02:37:16.625499 master-0 kubenswrapper[31411]: I0224 02:37:16.625469 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc5ccc685-kl2f6" Feb 24 02:37:16.626378 master-0 kubenswrapper[31411]: I0224 02:37:16.625479 31411 scope.go:117] "RemoveContainer" containerID="2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50" Feb 24 02:37:16.630001 master-0 kubenswrapper[31411]: I0224 02:37:16.629865 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-api-0" event={"ID":"1097a77b-8299-4100-b3f4-d94348cf6578","Type":"ContainerStarted","Data":"d0e55bfb67e4bfe818bedac1b6e15ddb7ea65b201e1499f220a707aa7d65c423"} Feb 24 02:37:16.699849 master-0 kubenswrapper[31411]: I0224 02:37:16.699814 31411 scope.go:117] "RemoveContainer" containerID="18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c" Feb 24 02:37:16.715858 master-0 kubenswrapper[31411]: I0224 02:37:16.715782 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-swift-storage-0\") pod \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " Feb 24 02:37:16.716127 master-0 kubenswrapper[31411]: I0224 02:37:16.715930 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-config\") pod \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " Feb 24 02:37:16.716127 master-0 kubenswrapper[31411]: I0224 02:37:16.716007 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tjljc\" (UniqueName: \"kubernetes.io/projected/dcfea485-a4c1-4ed9-bfb8-2853157da83a-kube-api-access-tjljc\") pod \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " Feb 24 02:37:16.716202 master-0 kubenswrapper[31411]: I0224 02:37:16.716190 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-svc\") pod \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " Feb 24 02:37:16.716328 master-0 kubenswrapper[31411]: I0224 02:37:16.716304 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-sb\") pod \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " Feb 24 02:37:16.716373 master-0 kubenswrapper[31411]: I0224 02:37:16.716354 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-nb\") pod \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\" (UID: \"dcfea485-a4c1-4ed9-bfb8-2853157da83a\") " Feb 24 02:37:16.736901 master-0 kubenswrapper[31411]: I0224 02:37:16.736841 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcfea485-a4c1-4ed9-bfb8-2853157da83a-kube-api-access-tjljc" (OuterVolumeSpecName: "kube-api-access-tjljc") pod "dcfea485-a4c1-4ed9-bfb8-2853157da83a" (UID: "dcfea485-a4c1-4ed9-bfb8-2853157da83a"). InnerVolumeSpecName "kube-api-access-tjljc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:16.783185 master-0 kubenswrapper[31411]: I0224 02:37:16.783130 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dcfea485-a4c1-4ed9-bfb8-2853157da83a" (UID: "dcfea485-a4c1-4ed9-bfb8-2853157da83a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:16.790781 master-0 kubenswrapper[31411]: I0224 02:37:16.790715 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-config" (OuterVolumeSpecName: "config") pod "dcfea485-a4c1-4ed9-bfb8-2853157da83a" (UID: "dcfea485-a4c1-4ed9-bfb8-2853157da83a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:16.804165 master-0 kubenswrapper[31411]: I0224 02:37:16.802863 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dcfea485-a4c1-4ed9-bfb8-2853157da83a" (UID: "dcfea485-a4c1-4ed9-bfb8-2853157da83a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:16.804165 master-0 kubenswrapper[31411]: I0224 02:37:16.803000 31411 scope.go:117] "RemoveContainer" containerID="2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50" Feb 24 02:37:16.809672 master-0 kubenswrapper[31411]: E0224 02:37:16.809610 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50\": container with ID starting with 2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50 not found: ID does not exist" containerID="2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50" Feb 24 02:37:16.810346 master-0 kubenswrapper[31411]: I0224 02:37:16.809676 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50"} err="failed to get container status \"2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50\": rpc error: code = NotFound desc = could not find container \"2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50\": container with ID starting with 2211f04ed5d84c1c323d5a3b81ef41db321aacb985cf203d8e49298cc3299c50 not found: ID does not exist" Feb 24 02:37:16.810346 master-0 kubenswrapper[31411]: I0224 02:37:16.809717 31411 scope.go:117] "RemoveContainer" containerID="18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c" Feb 24 02:37:16.811251 master-0 kubenswrapper[31411]: E0224 02:37:16.811219 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c\": container with ID starting with 18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c not found: ID does not exist" 
containerID="18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c" Feb 24 02:37:16.811251 master-0 kubenswrapper[31411]: I0224 02:37:16.811246 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c"} err="failed to get container status \"18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c\": rpc error: code = NotFound desc = could not find container \"18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c\": container with ID starting with 18ef565df257e875e545a03c10a4a7d8abdc5db10ad7302c6646158dc28a367c not found: ID does not exist" Feb 24 02:37:16.819301 master-0 kubenswrapper[31411]: I0224 02:37:16.819110 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dcfea485-a4c1-4ed9-bfb8-2853157da83a" (UID: "dcfea485-a4c1-4ed9-bfb8-2853157da83a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:16.820024 master-0 kubenswrapper[31411]: I0224 02:37:16.819928 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:16.820024 master-0 kubenswrapper[31411]: I0224 02:37:16.819970 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:16.820024 master-0 kubenswrapper[31411]: I0224 02:37:16.819982 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:16.820024 master-0 kubenswrapper[31411]: I0224 02:37:16.819992 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:16.820024 master-0 kubenswrapper[31411]: I0224 02:37:16.820003 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjljc\" (UniqueName: \"kubernetes.io/projected/dcfea485-a4c1-4ed9-bfb8-2853157da83a-kube-api-access-tjljc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:16.823401 master-0 kubenswrapper[31411]: I0224 02:37:16.823354 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dcfea485-a4c1-4ed9-bfb8-2853157da83a" (UID: "dcfea485-a4c1-4ed9-bfb8-2853157da83a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:16.873818 master-0 kubenswrapper[31411]: I0224 02:37:16.873746 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"] Feb 24 02:37:16.925174 master-0 kubenswrapper[31411]: I0224 02:37:16.922897 31411 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcfea485-a4c1-4ed9-bfb8-2853157da83a-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:16.994712 master-0 kubenswrapper[31411]: I0224 02:37:16.992644 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc5ccc685-kl2f6"] Feb 24 02:37:17.021676 master-0 kubenswrapper[31411]: I0224 02:37:17.021375 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc5ccc685-kl2f6"] Feb 24 02:37:17.036363 master-0 kubenswrapper[31411]: W0224 02:37:17.034390 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf4cb99b_a2a5_4423_ab60_5d180ea09e93.slice/crio-76786aafb3a8501556ff39b48f72982e9510bfe96393f0898583608c13af35d7 WatchSource:0}: Error finding container 76786aafb3a8501556ff39b48f72982e9510bfe96393f0898583608c13af35d7: Status 404 returned error can't find the container with id 76786aafb3a8501556ff39b48f72982e9510bfe96393f0898583608c13af35d7 Feb 24 02:37:17.045033 master-0 kubenswrapper[31411]: I0224 02:37:17.044981 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-2bdc-account-create-update-5cgdd"] Feb 24 02:37:17.158644 master-0 kubenswrapper[31411]: I0224 02:37:17.158478 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcfea485-a4c1-4ed9-bfb8-2853157da83a" path="/var/lib/kubelet/pods/dcfea485-a4c1-4ed9-bfb8-2853157da83a/volumes" Feb 24 02:37:17.218986 master-0 kubenswrapper[31411]: I0224 02:37:17.218942 31411 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b45666449-v77b5"] Feb 24 02:37:17.265727 master-0 kubenswrapper[31411]: I0224 02:37:17.265553 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-75c678c459-9mmbb"] Feb 24 02:37:17.351363 master-0 kubenswrapper[31411]: I0224 02:37:17.351307 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Feb 24 02:37:17.352548 master-0 kubenswrapper[31411]: E0224 02:37:17.352511 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcfea485-a4c1-4ed9-bfb8-2853157da83a" containerName="dnsmasq-dns" Feb 24 02:37:17.352548 master-0 kubenswrapper[31411]: I0224 02:37:17.352541 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcfea485-a4c1-4ed9-bfb8-2853157da83a" containerName="dnsmasq-dns" Feb 24 02:37:17.352654 master-0 kubenswrapper[31411]: E0224 02:37:17.352596 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcfea485-a4c1-4ed9-bfb8-2853157da83a" containerName="init" Feb 24 02:37:17.352654 master-0 kubenswrapper[31411]: I0224 02:37:17.352605 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcfea485-a4c1-4ed9-bfb8-2853157da83a" containerName="init" Feb 24 02:37:17.354067 master-0 kubenswrapper[31411]: I0224 02:37:17.354033 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcfea485-a4c1-4ed9-bfb8-2853157da83a" containerName="dnsmasq-dns" Feb 24 02:37:17.371108 master-0 kubenswrapper[31411]: I0224 02:37:17.370991 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Feb 24 02:37:17.376175 master-0 kubenswrapper[31411]: I0224 02:37:17.373418 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Feb 24 02:37:17.376175 master-0 kubenswrapper[31411]: I0224 02:37:17.374562 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Feb 24 02:37:17.394801 master-0 kubenswrapper[31411]: I0224 02:37:17.394685 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 24 02:37:17.483830 master-0 kubenswrapper[31411]: I0224 02:37:17.483791 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.483942 master-0 kubenswrapper[31411]: I0224 02:37:17.483880 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-scripts\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.483977 master-0 kubenswrapper[31411]: I0224 02:37:17.483947 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.484011 master-0 kubenswrapper[31411]: I0224 02:37:17.483984 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-config-data\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.484065 master-0 kubenswrapper[31411]: I0224 02:37:17.484045 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4941f0bb-aa69-433a-901d-c8b9ad538b67\" (UniqueName: \"kubernetes.io/csi/topolvm.io^86246bae-8cf9-4cf7-a1b3-dc4ecfbefe9d\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.484111 master-0 kubenswrapper[31411]: I0224 02:37:17.484092 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.484168 master-0 kubenswrapper[31411]: I0224 02:37:17.484150 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.484222 master-0 kubenswrapper[31411]: I0224 02:37:17.484215 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrpgk\" (UniqueName: \"kubernetes.io/projected/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-kube-api-access-vrpgk\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.588952 master-0 kubenswrapper[31411]: I0224 02:37:17.588764 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vrpgk\" (UniqueName: \"kubernetes.io/projected/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-kube-api-access-vrpgk\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.591346 master-0 kubenswrapper[31411]: I0224 02:37:17.591305 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.591404 master-0 kubenswrapper[31411]: I0224 02:37:17.591379 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-scripts\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.594594 master-0 kubenswrapper[31411]: I0224 02:37:17.591469 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.594594 master-0 kubenswrapper[31411]: I0224 02:37:17.591546 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-config-data\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.594594 master-0 kubenswrapper[31411]: I0224 02:37:17.591643 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4941f0bb-aa69-433a-901d-c8b9ad538b67\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^86246bae-8cf9-4cf7-a1b3-dc4ecfbefe9d\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.594594 master-0 kubenswrapper[31411]: I0224 02:37:17.591691 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.594810 master-0 kubenswrapper[31411]: I0224 02:37:17.594779 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.594847 master-0 kubenswrapper[31411]: I0224 02:37:17.594824 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.595691 master-0 kubenswrapper[31411]: I0224 02:37:17.595614 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.597506 master-0 kubenswrapper[31411]: I0224 02:37:17.596360 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 02:37:17.597506 master-0 kubenswrapper[31411]: I0224 02:37:17.596430 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4941f0bb-aa69-433a-901d-c8b9ad538b67\" (UniqueName: \"kubernetes.io/csi/topolvm.io^86246bae-8cf9-4cf7-a1b3-dc4ecfbefe9d\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/5c19cf87611c4a06910009fd943b9637b2a3869a55eb4a53a63cfd39dde3de97/globalmount\"" pod="openstack/ironic-conductor-0" Feb 24 02:37:17.597506 master-0 kubenswrapper[31411]: I0224 02:37:17.597070 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-scripts\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.601582 master-0 kubenswrapper[31411]: I0224 02:37:17.601513 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.608077 master-0 kubenswrapper[31411]: I0224 02:37:17.604003 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.608077 master-0 kubenswrapper[31411]: I0224 02:37:17.604787 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-config-data\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " 
pod="openstack/ironic-conductor-0" Feb 24 02:37:17.610821 master-0 kubenswrapper[31411]: I0224 02:37:17.610769 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrpgk\" (UniqueName: \"kubernetes.io/projected/d289f4ce-9a2f-4d57-bf7b-414618c7c4e8-kube-api-access-vrpgk\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:17.653695 master-0 kubenswrapper[31411]: I0224 02:37:17.653552 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-api-0" event={"ID":"1097a77b-8299-4100-b3f4-d94348cf6578","Type":"ContainerStarted","Data":"51d5b15fddc4ccb292cad043bef00b644ab62fd3273edfdef618f14eeb9391c3"} Feb 24 02:37:17.653853 master-0 kubenswrapper[31411]: I0224 02:37:17.653747 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:17.655477 master-0 kubenswrapper[31411]: I0224 02:37:17.655412 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" event={"ID":"72106e8c-2a98-4a82-9f36-c820986c5665","Type":"ContainerStarted","Data":"4090169bcc9e81416b5d6380a8d8da1c5c4181523e292632d3d05dd6a53ef152"} Feb 24 02:37:17.659055 master-0 kubenswrapper[31411]: I0224 02:37:17.658044 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" event={"ID":"df4cb99b-a2a5-4423-ab60-5d180ea09e93","Type":"ContainerStarted","Data":"66967fc67f5d734652bce57f0d6e278d63a9fb0e5f8989303157d0300581964f"} Feb 24 02:37:17.659055 master-0 kubenswrapper[31411]: I0224 02:37:17.658110 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" event={"ID":"df4cb99b-a2a5-4423-ab60-5d180ea09e93","Type":"ContainerStarted","Data":"76786aafb3a8501556ff39b48f72982e9510bfe96393f0898583608c13af35d7"} Feb 24 02:37:17.668602 master-0 
kubenswrapper[31411]: I0224 02:37:17.666711 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-75c678c459-9mmbb" event={"ID":"9ef8199f-6610-44a1-b85c-fc7f2bea6294","Type":"ContainerStarted","Data":"a688c60e9c9a887785f46b66598fa692f80f62afa09344e913a89c83d554cd86"} Feb 24 02:37:17.672930 master-0 kubenswrapper[31411]: I0224 02:37:17.669340 31411 generic.go:334] "Generic (PLEG): container finished" podID="8bdcc60c-f7fb-43df-8b93-035a0796383f" containerID="90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7" exitCode=0 Feb 24 02:37:17.672930 master-0 kubenswrapper[31411]: I0224 02:37:17.669432 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b45666449-v77b5" event={"ID":"8bdcc60c-f7fb-43df-8b93-035a0796383f","Type":"ContainerDied","Data":"90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7"} Feb 24 02:37:17.672930 master-0 kubenswrapper[31411]: I0224 02:37:17.669466 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b45666449-v77b5" event={"ID":"8bdcc60c-f7fb-43df-8b93-035a0796383f","Type":"ContainerStarted","Data":"da6f59ba4f50fa2fb89d2d8f92f8093cc3c586c66462fdcaecd150d8a45a50ab"} Feb 24 02:37:17.677777 master-0 kubenswrapper[31411]: I0224 02:37:17.677728 31411 generic.go:334] "Generic (PLEG): container finished" podID="681e6026-865f-4f36-9ca6-5321c0738d18" containerID="eba3c9e9d3744ec13ee7e23484b76faeb29d5d57adc284af42df09984d8ec479" exitCode=0 Feb 24 02:37:17.677858 master-0 kubenswrapper[31411]: I0224 02:37:17.677784 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-8kz9s" event={"ID":"681e6026-865f-4f36-9ca6-5321c0738d18","Type":"ContainerDied","Data":"eba3c9e9d3744ec13ee7e23484b76faeb29d5d57adc284af42df09984d8ec479"} Feb 24 02:37:17.677858 master-0 kubenswrapper[31411]: I0224 02:37:17.677813 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-8kz9s" 
event={"ID":"681e6026-865f-4f36-9ca6-5321c0738d18","Type":"ContainerStarted","Data":"28ed97ed23fbbdbd117661041e0cb54897621f21ddcae482f3be6ae0d2eac81f"} Feb 24 02:37:17.702179 master-0 kubenswrapper[31411]: I0224 02:37:17.701450 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ac23-api-0" podStartSLOduration=4.70142737 podStartE2EDuration="4.70142737s" podCreationTimestamp="2026-02-24 02:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:17.698388465 +0000 UTC m=+980.915586311" watchObservedRunningTime="2026-02-24 02:37:17.70142737 +0000 UTC m=+980.918625206" Feb 24 02:37:17.775304 master-0 kubenswrapper[31411]: I0224 02:37:17.775187 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" podStartSLOduration=2.775162667 podStartE2EDuration="2.775162667s" podCreationTimestamp="2026-02-24 02:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:17.768555702 +0000 UTC m=+980.985753548" watchObservedRunningTime="2026-02-24 02:37:17.775162667 +0000 UTC m=+980.992360513" Feb 24 02:37:17.991852 master-0 kubenswrapper[31411]: I0224 02:37:17.990513 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:18.075708 master-0 kubenswrapper[31411]: I0224 02:37:18.075662 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ac23-scheduler-0"] Feb 24 02:37:18.094872 master-0 kubenswrapper[31411]: I0224 02:37:18.094825 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:18.119209 master-0 kubenswrapper[31411]: I0224 02:37:18.119153 31411 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:18.247118 master-0 kubenswrapper[31411]: I0224 02:37:18.245809 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ac23-backup-0"] Feb 24 02:37:18.290319 master-0 kubenswrapper[31411]: I0224 02:37:18.290229 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ac23-volume-lvm-iscsi-0"] Feb 24 02:37:18.698025 master-0 kubenswrapper[31411]: I0224 02:37:18.697971 31411 generic.go:334] "Generic (PLEG): container finished" podID="df4cb99b-a2a5-4423-ab60-5d180ea09e93" containerID="66967fc67f5d734652bce57f0d6e278d63a9fb0e5f8989303157d0300581964f" exitCode=0 Feb 24 02:37:18.698590 master-0 kubenswrapper[31411]: I0224 02:37:18.698078 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" event={"ID":"df4cb99b-a2a5-4423-ab60-5d180ea09e93","Type":"ContainerDied","Data":"66967fc67f5d734652bce57f0d6e278d63a9fb0e5f8989303157d0300581964f"} Feb 24 02:37:18.701839 master-0 kubenswrapper[31411]: I0224 02:37:18.701761 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-6ac23-backup-0" podUID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerName="cinder-backup" containerID="cri-o://bb53df19c53cbee60d75cdef9ec6e572433bc2a787797618d3d964e238935c5a" gracePeriod=30 Feb 24 02:37:18.703906 master-0 kubenswrapper[31411]: I0224 02:37:18.703863 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b45666449-v77b5" event={"ID":"8bdcc60c-f7fb-43df-8b93-035a0796383f","Type":"ContainerStarted","Data":"47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3"} Feb 24 02:37:18.703958 master-0 kubenswrapper[31411]: I0224 02:37:18.703910 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:18.704186 master-0 kubenswrapper[31411]: I0224 
02:37:18.704148 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-6ac23-scheduler-0" podUID="8657494d-8b2e-4545-9713-e55ace12f329" containerName="cinder-scheduler" containerID="cri-o://eeb78254bf7f2baaf7618d1c8289888c1d4aedf8e156f196588221c7066ca527" gracePeriod=30 Feb 24 02:37:18.704363 master-0 kubenswrapper[31411]: I0224 02:37:18.704340 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" podUID="8e200864-094f-471c-ba9d-863d14ff8404" containerName="cinder-volume" containerID="cri-o://46b3e1401d4270531bf61719927a43f66a0d2786e4d9941333278da6d73172de" gracePeriod=30 Feb 24 02:37:18.704941 master-0 kubenswrapper[31411]: I0224 02:37:18.704897 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-6ac23-scheduler-0" podUID="8657494d-8b2e-4545-9713-e55ace12f329" containerName="probe" containerID="cri-o://86a3152df7e3a2c46f55c0278d2eb568f064bdbae35cf4046fa2c6c2d3131640" gracePeriod=30 Feb 24 02:37:18.704988 master-0 kubenswrapper[31411]: I0224 02:37:18.704955 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" podUID="8e200864-094f-471c-ba9d-863d14ff8404" containerName="probe" containerID="cri-o://d4320888b9dabb7b3f57c8dc04c0c7e4c1d12a141528226e6a521350a767e095" gracePeriod=30 Feb 24 02:37:18.705097 master-0 kubenswrapper[31411]: I0224 02:37:18.704879 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-6ac23-backup-0" podUID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerName="probe" containerID="cri-o://a18767a10ccb87aefe58cb57822457b32145a2e700ed95684236232652104c0d" gracePeriod=30 Feb 24 02:37:18.776348 master-0 kubenswrapper[31411]: I0224 02:37:18.776262 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b45666449-v77b5" 
podStartSLOduration=3.776238049 podStartE2EDuration="3.776238049s" podCreationTimestamp="2026-02-24 02:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:18.773586075 +0000 UTC m=+981.990783921" watchObservedRunningTime="2026-02-24 02:37:18.776238049 +0000 UTC m=+981.993435895" Feb 24 02:37:18.876435 master-0 kubenswrapper[31411]: I0224 02:37:18.876378 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-85b75c94bc-pp6mc"] Feb 24 02:37:18.879988 master-0 kubenswrapper[31411]: I0224 02:37:18.879771 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.883497 master-0 kubenswrapper[31411]: I0224 02:37:18.883454 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Feb 24 02:37:18.887066 master-0 kubenswrapper[31411]: I0224 02:37:18.883677 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Feb 24 02:37:18.933398 master-0 kubenswrapper[31411]: I0224 02:37:18.933166 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-85b75c94bc-pp6mc"] Feb 24 02:37:18.959608 master-0 kubenswrapper[31411]: I0224 02:37:18.959545 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-config-data-custom\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.959608 master-0 kubenswrapper[31411]: I0224 02:37:18.959602 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/da4d9085-6b7b-4507-803b-39a20e05bf2c-etc-podinfo\") pod 
\"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.959913 master-0 kubenswrapper[31411]: I0224 02:37:18.959643 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-combined-ca-bundle\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.959913 master-0 kubenswrapper[31411]: I0224 02:37:18.959674 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-scripts\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.959913 master-0 kubenswrapper[31411]: I0224 02:37:18.959708 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/da4d9085-6b7b-4507-803b-39a20e05bf2c-config-data-merged\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.959913 master-0 kubenswrapper[31411]: I0224 02:37:18.959758 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da4d9085-6b7b-4507-803b-39a20e05bf2c-logs\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.959913 master-0 kubenswrapper[31411]: I0224 02:37:18.959820 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7s4q\" (UniqueName: 
\"kubernetes.io/projected/da4d9085-6b7b-4507-803b-39a20e05bf2c-kube-api-access-p7s4q\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.959913 master-0 kubenswrapper[31411]: I0224 02:37:18.959856 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-config-data\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.959913 master-0 kubenswrapper[31411]: I0224 02:37:18.959878 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-internal-tls-certs\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:18.959913 master-0 kubenswrapper[31411]: I0224 02:37:18.959895 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-public-tls-certs\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.033779 master-0 kubenswrapper[31411]: I0224 02:37:19.032541 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4941f0bb-aa69-433a-901d-c8b9ad538b67\" (UniqueName: \"kubernetes.io/csi/topolvm.io^86246bae-8cf9-4cf7-a1b3-dc4ecfbefe9d\") pod \"ironic-conductor-0\" (UID: \"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8\") " pod="openstack/ironic-conductor-0" Feb 24 02:37:19.062936 master-0 kubenswrapper[31411]: I0224 02:37:19.062866 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-combined-ca-bundle\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.063269 master-0 kubenswrapper[31411]: I0224 02:37:19.063249 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-scripts\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.063412 master-0 kubenswrapper[31411]: I0224 02:37:19.063399 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/da4d9085-6b7b-4507-803b-39a20e05bf2c-config-data-merged\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.063517 master-0 kubenswrapper[31411]: I0224 02:37:19.063504 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da4d9085-6b7b-4507-803b-39a20e05bf2c-logs\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.063665 master-0 kubenswrapper[31411]: I0224 02:37:19.063651 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7s4q\" (UniqueName: \"kubernetes.io/projected/da4d9085-6b7b-4507-803b-39a20e05bf2c-kube-api-access-p7s4q\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.063767 master-0 kubenswrapper[31411]: I0224 02:37:19.063755 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-config-data\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.063851 master-0 kubenswrapper[31411]: I0224 02:37:19.063838 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-internal-tls-certs\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.063950 master-0 kubenswrapper[31411]: I0224 02:37:19.063932 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-public-tls-certs\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.064094 master-0 kubenswrapper[31411]: I0224 02:37:19.064081 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-config-data-custom\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.064179 master-0 kubenswrapper[31411]: I0224 02:37:19.064166 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/da4d9085-6b7b-4507-803b-39a20e05bf2c-etc-podinfo\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.068743 master-0 kubenswrapper[31411]: I0224 02:37:19.068699 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/da4d9085-6b7b-4507-803b-39a20e05bf2c-config-data-merged\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.070944 master-0 kubenswrapper[31411]: I0224 02:37:19.070769 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-internal-tls-certs\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.072520 master-0 kubenswrapper[31411]: I0224 02:37:19.071674 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/da4d9085-6b7b-4507-803b-39a20e05bf2c-etc-podinfo\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.072520 master-0 kubenswrapper[31411]: I0224 02:37:19.071835 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da4d9085-6b7b-4507-803b-39a20e05bf2c-logs\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.075531 master-0 kubenswrapper[31411]: I0224 02:37:19.075484 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-config-data\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.078928 master-0 kubenswrapper[31411]: I0224 02:37:19.078860 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-combined-ca-bundle\") pod 
\"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.081436 master-0 kubenswrapper[31411]: I0224 02:37:19.080931 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-scripts\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.084389 master-0 kubenswrapper[31411]: I0224 02:37:19.084340 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-public-tls-certs\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.085639 master-0 kubenswrapper[31411]: I0224 02:37:19.085585 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da4d9085-6b7b-4507-803b-39a20e05bf2c-config-data-custom\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.085803 master-0 kubenswrapper[31411]: I0224 02:37:19.085763 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7s4q\" (UniqueName: \"kubernetes.io/projected/da4d9085-6b7b-4507-803b-39a20e05bf2c-kube-api-access-p7s4q\") pod \"ironic-85b75c94bc-pp6mc\" (UID: \"da4d9085-6b7b-4507-803b-39a20e05bf2c\") " pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.229460 master-0 kubenswrapper[31411]: I0224 02:37:19.229162 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:19.291819 master-0 kubenswrapper[31411]: I0224 02:37:19.291741 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Feb 24 02:37:19.721792 master-0 kubenswrapper[31411]: I0224 02:37:19.721696 31411 generic.go:334] "Generic (PLEG): container finished" podID="8e200864-094f-471c-ba9d-863d14ff8404" containerID="d4320888b9dabb7b3f57c8dc04c0c7e4c1d12a141528226e6a521350a767e095" exitCode=0 Feb 24 02:37:19.722397 master-0 kubenswrapper[31411]: I0224 02:37:19.721801 31411 generic.go:334] "Generic (PLEG): container finished" podID="8e200864-094f-471c-ba9d-863d14ff8404" containerID="46b3e1401d4270531bf61719927a43f66a0d2786e4d9941333278da6d73172de" exitCode=0 Feb 24 02:37:19.722397 master-0 kubenswrapper[31411]: I0224 02:37:19.721814 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" event={"ID":"8e200864-094f-471c-ba9d-863d14ff8404","Type":"ContainerDied","Data":"d4320888b9dabb7b3f57c8dc04c0c7e4c1d12a141528226e6a521350a767e095"} Feb 24 02:37:19.722397 master-0 kubenswrapper[31411]: I0224 02:37:19.721885 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" event={"ID":"8e200864-094f-471c-ba9d-863d14ff8404","Type":"ContainerDied","Data":"46b3e1401d4270531bf61719927a43f66a0d2786e4d9941333278da6d73172de"} Feb 24 02:37:19.726230 master-0 kubenswrapper[31411]: I0224 02:37:19.726199 31411 generic.go:334] "Generic (PLEG): container finished" podID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerID="a18767a10ccb87aefe58cb57822457b32145a2e700ed95684236232652104c0d" exitCode=0 Feb 24 02:37:19.726230 master-0 kubenswrapper[31411]: I0224 02:37:19.726219 31411 generic.go:334] "Generic (PLEG): container finished" podID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerID="bb53df19c53cbee60d75cdef9ec6e572433bc2a787797618d3d964e238935c5a" exitCode=0 Feb 24 02:37:19.726318 master-0 kubenswrapper[31411]: I0224 02:37:19.726277 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-backup-0" 
event={"ID":"6fcfbc46-898e-4e7b-a96a-e2170c671a45","Type":"ContainerDied","Data":"a18767a10ccb87aefe58cb57822457b32145a2e700ed95684236232652104c0d"} Feb 24 02:37:19.726318 master-0 kubenswrapper[31411]: I0224 02:37:19.726313 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-backup-0" event={"ID":"6fcfbc46-898e-4e7b-a96a-e2170c671a45","Type":"ContainerDied","Data":"bb53df19c53cbee60d75cdef9ec6e572433bc2a787797618d3d964e238935c5a"} Feb 24 02:37:19.730288 master-0 kubenswrapper[31411]: I0224 02:37:19.730105 31411 generic.go:334] "Generic (PLEG): container finished" podID="8657494d-8b2e-4545-9713-e55ace12f329" containerID="86a3152df7e3a2c46f55c0278d2eb568f064bdbae35cf4046fa2c6c2d3131640" exitCode=0 Feb 24 02:37:19.730853 master-0 kubenswrapper[31411]: I0224 02:37:19.730171 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-scheduler-0" event={"ID":"8657494d-8b2e-4545-9713-e55ace12f329","Type":"ContainerDied","Data":"86a3152df7e3a2c46f55c0278d2eb568f064bdbae35cf4046fa2c6c2d3131640"} Feb 24 02:37:19.813091 master-0 kubenswrapper[31411]: I0224 02:37:19.813042 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-8kz9s" Feb 24 02:37:20.008760 master-0 kubenswrapper[31411]: I0224 02:37:20.008684 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prjc6\" (UniqueName: \"kubernetes.io/projected/681e6026-865f-4f36-9ca6-5321c0738d18-kube-api-access-prjc6\") pod \"681e6026-865f-4f36-9ca6-5321c0738d18\" (UID: \"681e6026-865f-4f36-9ca6-5321c0738d18\") " Feb 24 02:37:20.009140 master-0 kubenswrapper[31411]: I0224 02:37:20.009106 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681e6026-865f-4f36-9ca6-5321c0738d18-operator-scripts\") pod \"681e6026-865f-4f36-9ca6-5321c0738d18\" (UID: \"681e6026-865f-4f36-9ca6-5321c0738d18\") " Feb 24 02:37:20.010752 master-0 kubenswrapper[31411]: I0224 02:37:20.010688 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681e6026-865f-4f36-9ca6-5321c0738d18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "681e6026-865f-4f36-9ca6-5321c0738d18" (UID: "681e6026-865f-4f36-9ca6-5321c0738d18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:20.016129 master-0 kubenswrapper[31411]: I0224 02:37:20.016031 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/681e6026-865f-4f36-9ca6-5321c0738d18-kube-api-access-prjc6" (OuterVolumeSpecName: "kube-api-access-prjc6") pod "681e6026-865f-4f36-9ca6-5321c0738d18" (UID: "681e6026-865f-4f36-9ca6-5321c0738d18"). InnerVolumeSpecName "kube-api-access-prjc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:20.114041 master-0 kubenswrapper[31411]: I0224 02:37:20.113886 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681e6026-865f-4f36-9ca6-5321c0738d18-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.114041 master-0 kubenswrapper[31411]: I0224 02:37:20.113943 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prjc6\" (UniqueName: \"kubernetes.io/projected/681e6026-865f-4f36-9ca6-5321c0738d18-kube-api-access-prjc6\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.342141 master-0 kubenswrapper[31411]: I0224 02:37:20.342084 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" Feb 24 02:37:20.432529 master-0 kubenswrapper[31411]: I0224 02:37:20.432471 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df4cb99b-a2a5-4423-ab60-5d180ea09e93-operator-scripts\") pod \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\" (UID: \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\") " Feb 24 02:37:20.432889 master-0 kubenswrapper[31411]: I0224 02:37:20.432669 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6zjj\" (UniqueName: \"kubernetes.io/projected/df4cb99b-a2a5-4423-ab60-5d180ea09e93-kube-api-access-c6zjj\") pod \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\" (UID: \"df4cb99b-a2a5-4423-ab60-5d180ea09e93\") " Feb 24 02:37:20.433045 master-0 kubenswrapper[31411]: I0224 02:37:20.433016 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df4cb99b-a2a5-4423-ab60-5d180ea09e93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df4cb99b-a2a5-4423-ab60-5d180ea09e93" (UID: "df4cb99b-a2a5-4423-ab60-5d180ea09e93"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:20.433200 master-0 kubenswrapper[31411]: I0224 02:37:20.433178 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df4cb99b-a2a5-4423-ab60-5d180ea09e93-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.461009 master-0 kubenswrapper[31411]: I0224 02:37:20.457344 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4cb99b-a2a5-4423-ab60-5d180ea09e93-kube-api-access-c6zjj" (OuterVolumeSpecName: "kube-api-access-c6zjj") pod "df4cb99b-a2a5-4423-ab60-5d180ea09e93" (UID: "df4cb99b-a2a5-4423-ab60-5d180ea09e93"). InnerVolumeSpecName "kube-api-access-c6zjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:20.536747 master-0 kubenswrapper[31411]: I0224 02:37:20.536566 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6zjj\" (UniqueName: \"kubernetes.io/projected/df4cb99b-a2a5-4423-ab60-5d180ea09e93-kube-api-access-c6zjj\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.593907 master-0 kubenswrapper[31411]: I0224 02:37:20.593827 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:20.638947 master-0 kubenswrapper[31411]: I0224 02:37:20.638869 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-sys\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.638947 master-0 kubenswrapper[31411]: I0224 02:37:20.638940 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-combined-ca-bundle\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639392 master-0 kubenswrapper[31411]: I0224 02:37:20.639194 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-iscsi\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639392 master-0 kubenswrapper[31411]: I0224 02:37:20.639271 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr68t\" (UniqueName: \"kubernetes.io/projected/8e200864-094f-471c-ba9d-863d14ff8404-kube-api-access-gr68t\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639392 master-0 kubenswrapper[31411]: I0224 02:37:20.639318 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639392 master-0 kubenswrapper[31411]: I0224 02:37:20.639356 31411 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-nvme\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639515 master-0 kubenswrapper[31411]: I0224 02:37:20.639406 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-machine-id\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639515 master-0 kubenswrapper[31411]: I0224 02:37:20.639429 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-cinder\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639515 master-0 kubenswrapper[31411]: I0224 02:37:20.639514 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-lib-cinder\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639827 master-0 kubenswrapper[31411]: I0224 02:37:20.639583 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data-custom\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639827 master-0 kubenswrapper[31411]: I0224 02:37:20.639603 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-lib-modules\") pod 
\"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639827 master-0 kubenswrapper[31411]: I0224 02:37:20.639822 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-brick\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.639994 master-0 kubenswrapper[31411]: I0224 02:37:20.639953 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-scripts\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.640061 master-0 kubenswrapper[31411]: I0224 02:37:20.640032 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-run\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.640061 master-0 kubenswrapper[31411]: I0224 02:37:20.640057 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-dev\") pod \"8e200864-094f-471c-ba9d-863d14ff8404\" (UID: \"8e200864-094f-471c-ba9d-863d14ff8404\") " Feb 24 02:37:20.641159 master-0 kubenswrapper[31411]: I0224 02:37:20.641093 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.641215 master-0 kubenswrapper[31411]: I0224 02:37:20.641161 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-dev" (OuterVolumeSpecName: "dev") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.641215 master-0 kubenswrapper[31411]: I0224 02:37:20.641170 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-sys" (OuterVolumeSpecName: "sys") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.641286 master-0 kubenswrapper[31411]: I0224 02:37:20.641219 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.641286 master-0 kubenswrapper[31411]: I0224 02:37:20.641249 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.642087 master-0 kubenswrapper[31411]: I0224 02:37:20.641740 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.642087 master-0 kubenswrapper[31411]: I0224 02:37:20.641776 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.642087 master-0 kubenswrapper[31411]: I0224 02:37:20.641801 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.642087 master-0 kubenswrapper[31411]: I0224 02:37:20.641827 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.645072 master-0 kubenswrapper[31411]: I0224 02:37:20.644947 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-run" (OuterVolumeSpecName: "run") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.653162 master-0 kubenswrapper[31411]: I0224 02:37:20.653098 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:20.655254 master-0 kubenswrapper[31411]: I0224 02:37:20.655192 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-scripts" (OuterVolumeSpecName: "scripts") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:20.686760 master-0 kubenswrapper[31411]: I0224 02:37:20.685978 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e200864-094f-471c-ba9d-863d14ff8404-kube-api-access-gr68t" (OuterVolumeSpecName: "kube-api-access-gr68t") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "kube-api-access-gr68t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:20.712952 master-0 kubenswrapper[31411]: I0224 02:37:20.712843 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755219 31411 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-sys\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755269 31411 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755283 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr68t\" (UniqueName: \"kubernetes.io/projected/8e200864-094f-471c-ba9d-863d14ff8404-kube-api-access-gr68t\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755295 31411 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755305 31411 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755315 31411 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755324 31411 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755334 31411 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755343 31411 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755352 31411 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755361 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755369 31411 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-run\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.755719 master-0 kubenswrapper[31411]: I0224 02:37:20.755378 31411 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8e200864-094f-471c-ba9d-863d14ff8404-dev\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.777125 master-0 kubenswrapper[31411]: I0224 02:37:20.777060 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-85b75c94bc-pp6mc"] Feb 24 02:37:20.803050 master-0 
kubenswrapper[31411]: I0224 02:37:20.802674 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" Feb 24 02:37:20.803050 master-0 kubenswrapper[31411]: I0224 02:37:20.802780 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-2bdc-account-create-update-5cgdd" event={"ID":"df4cb99b-a2a5-4423-ab60-5d180ea09e93","Type":"ContainerDied","Data":"76786aafb3a8501556ff39b48f72982e9510bfe96393f0898583608c13af35d7"} Feb 24 02:37:20.803050 master-0 kubenswrapper[31411]: I0224 02:37:20.802835 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76786aafb3a8501556ff39b48f72982e9510bfe96393f0898583608c13af35d7" Feb 24 02:37:20.811462 master-0 kubenswrapper[31411]: I0224 02:37:20.811335 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-75c678c459-9mmbb" event={"ID":"9ef8199f-6610-44a1-b85c-fc7f2bea6294","Type":"ContainerStarted","Data":"8ddc546cc3f3991e336a928e41297a654e01303391232133cff6addd93a06bab"} Feb 24 02:37:20.828935 master-0 kubenswrapper[31411]: I0224 02:37:20.822262 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-8kz9s" event={"ID":"681e6026-865f-4f36-9ca6-5321c0738d18","Type":"ContainerDied","Data":"28ed97ed23fbbdbd117661041e0cb54897621f21ddcae482f3be6ae0d2eac81f"} Feb 24 02:37:20.828935 master-0 kubenswrapper[31411]: I0224 02:37:20.822310 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28ed97ed23fbbdbd117661041e0cb54897621f21ddcae482f3be6ae0d2eac81f" Feb 24 02:37:20.828935 master-0 kubenswrapper[31411]: I0224 02:37:20.822380 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-8kz9s" Feb 24 02:37:20.835258 master-0 kubenswrapper[31411]: I0224 02:37:20.835106 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" event={"ID":"72106e8c-2a98-4a82-9f36-c820986c5665","Type":"ContainerStarted","Data":"13e540b83a6ce8264cb378fa002ef09e27fbf9e555b424f3a00379cdfc0c4c84"} Feb 24 02:37:20.836719 master-0 kubenswrapper[31411]: I0224 02:37:20.835692 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" Feb 24 02:37:20.855943 master-0 kubenswrapper[31411]: I0224 02:37:20.855874 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" event={"ID":"8e200864-094f-471c-ba9d-863d14ff8404","Type":"ContainerDied","Data":"ddc5b9b1cc70f148d1c39a1db204e8407808aa7b5be8bdaf88463f576851eaf1"} Feb 24 02:37:20.856060 master-0 kubenswrapper[31411]: I0224 02:37:20.855952 31411 scope.go:117] "RemoveContainer" containerID="d4320888b9dabb7b3f57c8dc04c0c7e4c1d12a141528226e6a521350a767e095" Feb 24 02:37:20.859615 master-0 kubenswrapper[31411]: I0224 02:37:20.859588 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:20.860676 master-0 kubenswrapper[31411]: I0224 02:37:20.860376 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-sys\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.860676 master-0 kubenswrapper[31411]: I0224 02:37:20.860506 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-cinder\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.860676 master-0 kubenswrapper[31411]: I0224 02:37:20.860634 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-nvme\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.860823 master-0 kubenswrapper[31411]: I0224 02:37:20.860690 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-lib-cinder\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.860955 master-0 kubenswrapper[31411]: I0224 02:37:20.860935 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.861783 master-0 kubenswrapper[31411]: I0224 02:37:20.861022 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-sys" (OuterVolumeSpecName: "sys") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.863674 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.863764 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.863773 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data-custom\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.863803 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.863851 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-run\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.863885 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-machine-id\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.863942 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-combined-ca-bundle\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 
master-0 kubenswrapper[31411]: I0224 02:37:20.863983 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9lkq\" (UniqueName: \"kubernetes.io/projected/6fcfbc46-898e-4e7b-a96a-e2170c671a45-kube-api-access-l9lkq\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.864010 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-dev\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.864035 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-lib-modules\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.864054 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-iscsi\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.864081 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-scripts\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.864116 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-brick\") pod \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\" (UID: \"6fcfbc46-898e-4e7b-a96a-e2170c671a45\") " Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.864630 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.868548 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.868679 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.868701 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-dev" (OuterVolumeSpecName: "dev") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.868726 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.868744 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-run" (OuterVolumeSpecName: "run") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.875561 master-0 kubenswrapper[31411]: I0224 02:37:20.874717 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.877178 31411 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-run\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.878313 31411 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.879027 31411 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-dev\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.879040 31411 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.879049 31411 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.879058 31411 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-sys\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.879070 31411 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" 
Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.879083 31411 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.879092 31411 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.892389 master-0 kubenswrapper[31411]: I0224 02:37:20.879102 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.896873 master-0 kubenswrapper[31411]: I0224 02:37:20.894872 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcfbc46-898e-4e7b-a96a-e2170c671a45-kube-api-access-l9lkq" (OuterVolumeSpecName: "kube-api-access-l9lkq") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "kube-api-access-l9lkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:20.896873 master-0 kubenswrapper[31411]: I0224 02:37:20.896649 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:20.920674 master-0 kubenswrapper[31411]: I0224 02:37:20.907505 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-scripts" (OuterVolumeSpecName: "scripts") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:20.920674 master-0 kubenswrapper[31411]: I0224 02:37:20.911791 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" podStartSLOduration=2.840265143 podStartE2EDuration="5.911767605s" podCreationTimestamp="2026-02-24 02:37:15 +0000 UTC" firstStartedPulling="2026-02-24 02:37:16.903646006 +0000 UTC m=+980.120843852" lastFinishedPulling="2026-02-24 02:37:19.975148468 +0000 UTC m=+983.192346314" observedRunningTime="2026-02-24 02:37:20.879711806 +0000 UTC m=+984.096909652" watchObservedRunningTime="2026-02-24 02:37:20.911767605 +0000 UTC m=+984.128965451" Feb 24 02:37:20.920674 master-0 kubenswrapper[31411]: I0224 02:37:20.920584 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:20.921548 master-0 kubenswrapper[31411]: I0224 02:37:20.921400 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-backup-0" event={"ID":"6fcfbc46-898e-4e7b-a96a-e2170c671a45","Type":"ContainerDied","Data":"e93546c2e2924092e4bb69db692e3fb7d1c245b5c49c3c0ba03dc725391e17b5"} Feb 24 02:37:20.929386 master-0 kubenswrapper[31411]: I0224 02:37:20.929325 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 24 02:37:20.948602 master-0 kubenswrapper[31411]: I0224 02:37:20.946001 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data" (OuterVolumeSpecName: "config-data") pod "8e200864-094f-471c-ba9d-863d14ff8404" (UID: "8e200864-094f-471c-ba9d-863d14ff8404"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:20.992603 master-0 kubenswrapper[31411]: I0224 02:37:20.984544 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9lkq\" (UniqueName: \"kubernetes.io/projected/6fcfbc46-898e-4e7b-a96a-e2170c671a45-kube-api-access-l9lkq\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.992603 master-0 kubenswrapper[31411]: I0224 02:37:20.984602 31411 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6fcfbc46-898e-4e7b-a96a-e2170c671a45-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.992603 master-0 kubenswrapper[31411]: I0224 02:37:20.984615 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.992603 master-0 kubenswrapper[31411]: I0224 02:37:20.984625 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8e200864-094f-471c-ba9d-863d14ff8404-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.992603 master-0 kubenswrapper[31411]: I0224 02:37:20.984636 31411 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:20.996848 master-0 kubenswrapper[31411]: I0224 02:37:20.994857 31411 scope.go:117] "RemoveContainer" containerID="46b3e1401d4270531bf61719927a43f66a0d2786e4d9941333278da6d73172de" Feb 24 02:37:21.007506 master-0 kubenswrapper[31411]: I0224 02:37:21.007441 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:21.092090 master-0 kubenswrapper[31411]: I0224 02:37:21.088607 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:21.095898 master-0 kubenswrapper[31411]: I0224 02:37:21.095837 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data" (OuterVolumeSpecName: "config-data") pod "6fcfbc46-898e-4e7b-a96a-e2170c671a45" (UID: "6fcfbc46-898e-4e7b-a96a-e2170c671a45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:21.194342 master-0 kubenswrapper[31411]: I0224 02:37:21.194274 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcfbc46-898e-4e7b-a96a-e2170c671a45-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:21.240770 master-0 kubenswrapper[31411]: I0224 02:37:21.240725 31411 scope.go:117] "RemoveContainer" containerID="a18767a10ccb87aefe58cb57822457b32145a2e700ed95684236232652104c0d" Feb 24 02:37:21.285145 master-0 kubenswrapper[31411]: I0224 02:37:21.285044 31411 scope.go:117] "RemoveContainer" containerID="bb53df19c53cbee60d75cdef9ec6e572433bc2a787797618d3d964e238935c5a" Feb 24 02:37:21.292037 master-0 kubenswrapper[31411]: I0224 02:37:21.291968 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ac23-volume-lvm-iscsi-0"] Feb 24 02:37:21.312962 master-0 kubenswrapper[31411]: I0224 02:37:21.312475 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6ac23-volume-lvm-iscsi-0"] Feb 24 02:37:21.362468 master-0 kubenswrapper[31411]: I0224 02:37:21.362381 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ac23-backup-0"] Feb 24 02:37:21.396850 master-0 kubenswrapper[31411]: I0224 02:37:21.396771 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ac23-volume-lvm-iscsi-0"] Feb 24 02:37:21.397557 master-0 kubenswrapper[31411]: E0224 02:37:21.397465 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerName="probe" Feb 24 02:37:21.397557 master-0 kubenswrapper[31411]: I0224 02:37:21.397485 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerName="probe" Feb 24 02:37:21.397557 master-0 kubenswrapper[31411]: E0224 02:37:21.397505 31411 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerName="cinder-backup" Feb 24 02:37:21.397557 master-0 kubenswrapper[31411]: I0224 02:37:21.397512 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerName="cinder-backup" Feb 24 02:37:21.397557 master-0 kubenswrapper[31411]: E0224 02:37:21.397558 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4cb99b-a2a5-4423-ab60-5d180ea09e93" containerName="mariadb-account-create-update" Feb 24 02:37:21.397828 master-0 kubenswrapper[31411]: I0224 02:37:21.397565 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4cb99b-a2a5-4423-ab60-5d180ea09e93" containerName="mariadb-account-create-update" Feb 24 02:37:21.397828 master-0 kubenswrapper[31411]: E0224 02:37:21.397603 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="681e6026-865f-4f36-9ca6-5321c0738d18" containerName="mariadb-database-create" Feb 24 02:37:21.397828 master-0 kubenswrapper[31411]: I0224 02:37:21.397610 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="681e6026-865f-4f36-9ca6-5321c0738d18" containerName="mariadb-database-create" Feb 24 02:37:21.397828 master-0 kubenswrapper[31411]: E0224 02:37:21.397640 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e200864-094f-471c-ba9d-863d14ff8404" containerName="cinder-volume" Feb 24 02:37:21.397828 master-0 kubenswrapper[31411]: I0224 02:37:21.397647 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e200864-094f-471c-ba9d-863d14ff8404" containerName="cinder-volume" Feb 24 02:37:21.397828 master-0 kubenswrapper[31411]: E0224 02:37:21.397660 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e200864-094f-471c-ba9d-863d14ff8404" containerName="probe" Feb 24 02:37:21.397828 master-0 kubenswrapper[31411]: I0224 02:37:21.397666 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e200864-094f-471c-ba9d-863d14ff8404" containerName="probe" 
Feb 24 02:37:21.398863 master-0 kubenswrapper[31411]: I0224 02:37:21.397918 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e200864-094f-471c-ba9d-863d14ff8404" containerName="cinder-volume" Feb 24 02:37:21.398863 master-0 kubenswrapper[31411]: I0224 02:37:21.397946 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e200864-094f-471c-ba9d-863d14ff8404" containerName="probe" Feb 24 02:37:21.398863 master-0 kubenswrapper[31411]: I0224 02:37:21.397960 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerName="probe" Feb 24 02:37:21.398863 master-0 kubenswrapper[31411]: I0224 02:37:21.397992 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" containerName="cinder-backup" Feb 24 02:37:21.398863 master-0 kubenswrapper[31411]: I0224 02:37:21.398006 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4cb99b-a2a5-4423-ab60-5d180ea09e93" containerName="mariadb-account-create-update" Feb 24 02:37:21.398863 master-0 kubenswrapper[31411]: I0224 02:37:21.398027 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="681e6026-865f-4f36-9ca6-5321c0738d18" containerName="mariadb-database-create" Feb 24 02:37:21.404773 master-0 kubenswrapper[31411]: I0224 02:37:21.404060 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.409817 master-0 kubenswrapper[31411]: I0224 02:37:21.407316 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-volume-lvm-iscsi-config-data" Feb 24 02:37:21.448809 master-0 kubenswrapper[31411]: I0224 02:37:21.448631 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6ac23-backup-0"] Feb 24 02:37:21.459810 master-0 kubenswrapper[31411]: I0224 02:37:21.459754 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-volume-lvm-iscsi-0"] Feb 24 02:37:21.496354 master-0 kubenswrapper[31411]: I0224 02:37:21.496280 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ac23-backup-0"] Feb 24 02:37:21.501793 master-0 kubenswrapper[31411]: I0224 02:37:21.501735 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.505270 master-0 kubenswrapper[31411]: I0224 02:37:21.505215 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-var-locks-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505346 master-0 kubenswrapper[31411]: I0224 02:37:21.505320 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-lib-modules\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505407 master-0 kubenswrapper[31411]: I0224 02:37:21.505349 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-var-locks-brick\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505407 master-0 kubenswrapper[31411]: I0224 02:37:21.505374 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-etc-iscsi\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505407 master-0 kubenswrapper[31411]: I0224 02:37:21.505407 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-var-lib-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505508 master-0 kubenswrapper[31411]: I0224 02:37:21.505445 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-dev\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505508 master-0 kubenswrapper[31411]: I0224 02:37:21.505469 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-sys\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505508 master-0 kubenswrapper[31411]: I0224 
02:37:21.505506 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-combined-ca-bundle\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505619 master-0 kubenswrapper[31411]: I0224 02:37:21.505535 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-etc-nvme\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505619 master-0 kubenswrapper[31411]: I0224 02:37:21.505586 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-etc-machine-id\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505619 master-0 kubenswrapper[31411]: I0224 02:37:21.505609 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72s9z\" (UniqueName: \"kubernetes.io/projected/d6d3b198-9449-43a4-9252-2659f60e7959-kube-api-access-72s9z\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505707 master-0 kubenswrapper[31411]: I0224 02:37:21.505629 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-config-data\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: 
\"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505707 master-0 kubenswrapper[31411]: I0224 02:37:21.505654 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-run\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505707 master-0 kubenswrapper[31411]: I0224 02:37:21.505672 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-scripts\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.505707 master-0 kubenswrapper[31411]: I0224 02:37:21.505704 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-config-data-custom\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.511123 master-0 kubenswrapper[31411]: I0224 02:37:21.511079 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-backup-config-data" Feb 24 02:37:21.512130 master-0 kubenswrapper[31411]: I0224 02:37:21.512099 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-backup-0"] Feb 24 02:37:21.610495 master-0 kubenswrapper[31411]: I0224 02:37:21.610278 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-var-locks-cinder\") 
pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.610495 master-0 kubenswrapper[31411]: I0224 02:37:21.610362 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-var-locks-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.610495 master-0 kubenswrapper[31411]: I0224 02:37:21.610448 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-sys\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.610495 master-0 kubenswrapper[31411]: I0224 02:37:21.610487 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-etc-machine-id\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.610762 master-0 kubenswrapper[31411]: I0224 02:37:21.610517 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qps49\" (UniqueName: \"kubernetes.io/projected/eaa90bcd-7367-47bd-ab29-0fe04562014b-kube-api-access-qps49\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.610762 master-0 kubenswrapper[31411]: I0224 02:37:21.610614 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-config-data\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.610762 master-0 kubenswrapper[31411]: I0224 02:37:21.610649 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-lib-modules\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.610762 master-0 kubenswrapper[31411]: I0224 02:37:21.610675 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-var-locks-brick\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.610762 master-0 kubenswrapper[31411]: I0224 02:37:21.610709 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-etc-iscsi\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.610762 master-0 kubenswrapper[31411]: I0224 02:37:21.610730 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-combined-ca-bundle\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.610952 master-0 kubenswrapper[31411]: I0224 02:37:21.610787 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" 
(UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-var-lib-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.610952 master-0 kubenswrapper[31411]: I0224 02:37:21.610839 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-dev\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.610952 master-0 kubenswrapper[31411]: I0224 02:37:21.610867 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-sys\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.610952 master-0 kubenswrapper[31411]: I0224 02:37:21.610923 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-run\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.610952 master-0 kubenswrapper[31411]: I0224 02:37:21.610955 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-combined-ca-bundle\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.611100 master-0 kubenswrapper[31411]: I0224 02:37:21.610974 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-lib-modules\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.611100 master-0 kubenswrapper[31411]: I0224 02:37:21.610997 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-etc-nvme\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.611100 master-0 kubenswrapper[31411]: I0224 02:37:21.611028 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-config-data-custom\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.611100 master-0 kubenswrapper[31411]: I0224 02:37:21.611054 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-etc-nvme\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.611100 master-0 kubenswrapper[31411]: I0224 02:37:21.611078 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-var-lib-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.611263 master-0 kubenswrapper[31411]: I0224 02:37:21.611105 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dev\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-dev\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.611263 master-0 kubenswrapper[31411]: I0224 02:37:21.611130 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-scripts\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.611263 master-0 kubenswrapper[31411]: I0224 02:37:21.611168 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-etc-machine-id\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.611263 master-0 kubenswrapper[31411]: I0224 02:37:21.611260 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72s9z\" (UniqueName: \"kubernetes.io/projected/d6d3b198-9449-43a4-9252-2659f60e7959-kube-api-access-72s9z\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.611386 master-0 kubenswrapper[31411]: I0224 02:37:21.611285 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-config-data\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.611386 master-0 kubenswrapper[31411]: I0224 02:37:21.611321 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-run\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.611386 master-0 kubenswrapper[31411]: I0224 02:37:21.611323 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-var-lib-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.611489 master-0 kubenswrapper[31411]: I0224 02:37:21.611342 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-scripts\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.611523 master-0 kubenswrapper[31411]: I0224 02:37:21.611495 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-etc-iscsi\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.611664 master-0 kubenswrapper[31411]: I0224 02:37:21.611598 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-var-locks-brick\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.611724 master-0 kubenswrapper[31411]: I0224 02:37:21.611700 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-config-data-custom\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.612795 master-0 kubenswrapper[31411]: I0224 02:37:21.612742 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-var-locks-cinder\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.612853 master-0 kubenswrapper[31411]: I0224 02:37:21.612809 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-lib-modules\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.614810 master-0 kubenswrapper[31411]: I0224 02:37:21.614745 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-dev\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.614898 master-0 kubenswrapper[31411]: I0224 02:37:21.614829 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-etc-nvme\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.614898 master-0 kubenswrapper[31411]: I0224 02:37:21.614850 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-run\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.615487 master-0 kubenswrapper[31411]: I0224 02:37:21.615441 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-etc-machine-id\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.615551 master-0 kubenswrapper[31411]: I0224 02:37:21.615523 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-scripts\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.615605 master-0 kubenswrapper[31411]: I0224 02:37:21.615591 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-etc-iscsi\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.615638 master-0 kubenswrapper[31411]: I0224 02:37:21.615615 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-sys\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.616443 master-0 kubenswrapper[31411]: I0224 02:37:21.616389 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/d6d3b198-9449-43a4-9252-2659f60e7959-var-locks-brick\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.622180 master-0 kubenswrapper[31411]: I0224 02:37:21.621938 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-combined-ca-bundle\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.624821 master-0 kubenswrapper[31411]: I0224 02:37:21.624786 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-config-data-custom\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.626803 master-0 kubenswrapper[31411]: I0224 02:37:21.626752 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d3b198-9449-43a4-9252-2659f60e7959-config-data\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.639556 master-0 kubenswrapper[31411]: I0224 02:37:21.639508 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72s9z\" (UniqueName: \"kubernetes.io/projected/d6d3b198-9449-43a4-9252-2659f60e7959-kube-api-access-72s9z\") pod \"cinder-6ac23-volume-lvm-iscsi-0\" (UID: \"d6d3b198-9449-43a4-9252-2659f60e7959\") " pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.724862 master-0 kubenswrapper[31411]: I0224 02:37:21.723034 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-config-data\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725142 master-0 kubenswrapper[31411]: I0224 02:37:21.725020 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-combined-ca-bundle\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725410 master-0 kubenswrapper[31411]: I0224 02:37:21.725360 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-run\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725457 master-0 kubenswrapper[31411]: I0224 02:37:21.725422 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-etc-nvme\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725457 master-0 kubenswrapper[31411]: I0224 02:37:21.725446 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-lib-modules\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725537 master-0 kubenswrapper[31411]: I0224 02:37:21.725502 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-config-data-custom\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725606 master-0 kubenswrapper[31411]: I0224 02:37:21.725558 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-var-lib-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725644 master-0 kubenswrapper[31411]: I0224 02:37:21.725618 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-dev\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725675 master-0 kubenswrapper[31411]: I0224 02:37:21.725652 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-scripts\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725815 master-0 kubenswrapper[31411]: I0224 02:37:21.725764 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-etc-iscsi\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725815 master-0 kubenswrapper[31411]: I0224 02:37:21.725807 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-var-locks-brick\") pod 
\"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.725889 master-0 kubenswrapper[31411]: I0224 02:37:21.725876 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-var-locks-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.726008 master-0 kubenswrapper[31411]: I0224 02:37:21.725947 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-sys\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.726058 master-0 kubenswrapper[31411]: I0224 02:37:21.726027 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-sys\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.726058 master-0 kubenswrapper[31411]: I0224 02:37:21.723306 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:21.726351 master-0 kubenswrapper[31411]: I0224 02:37:21.726316 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-etc-machine-id\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.726394 master-0 kubenswrapper[31411]: I0224 02:37:21.726364 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qps49\" (UniqueName: \"kubernetes.io/projected/eaa90bcd-7367-47bd-ab29-0fe04562014b-kube-api-access-qps49\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.727286 master-0 kubenswrapper[31411]: I0224 02:37:21.727234 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-var-lib-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.727286 master-0 kubenswrapper[31411]: I0224 02:37:21.727235 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-etc-iscsi\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.727390 master-0 kubenswrapper[31411]: I0224 02:37:21.727335 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-var-locks-cinder\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " 
pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.727390 master-0 kubenswrapper[31411]: I0224 02:37:21.727355 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-dev\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.727458 master-0 kubenswrapper[31411]: I0224 02:37:21.727382 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-var-locks-brick\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.727458 master-0 kubenswrapper[31411]: I0224 02:37:21.727417 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-run\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.727458 master-0 kubenswrapper[31411]: I0224 02:37:21.727454 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-etc-nvme\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.727555 master-0 kubenswrapper[31411]: I0224 02:37:21.727479 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-lib-modules\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.729495 master-0 kubenswrapper[31411]: I0224 02:37:21.729414 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-config-data\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.729595 master-0 kubenswrapper[31411]: I0224 02:37:21.729508 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaa90bcd-7367-47bd-ab29-0fe04562014b-etc-machine-id\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.731534 master-0 kubenswrapper[31411]: I0224 02:37:21.731471 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-combined-ca-bundle\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.735360 master-0 kubenswrapper[31411]: I0224 02:37:21.734846 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-8f98fb65f-btxw6" Feb 24 02:37:21.735441 master-0 kubenswrapper[31411]: I0224 02:37:21.735387 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-config-data-custom\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.744059 master-0 kubenswrapper[31411]: I0224 02:37:21.744013 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaa90bcd-7367-47bd-ab29-0fe04562014b-scripts\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.747902 
master-0 kubenswrapper[31411]: I0224 02:37:21.747838 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qps49\" (UniqueName: \"kubernetes.io/projected/eaa90bcd-7367-47bd-ab29-0fe04562014b-kube-api-access-qps49\") pod \"cinder-6ac23-backup-0\" (UID: \"eaa90bcd-7367-47bd-ab29-0fe04562014b\") " pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.859405 master-0 kubenswrapper[31411]: I0224 02:37:21.854224 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:21.992626 master-0 kubenswrapper[31411]: I0224 02:37:21.992528 31411 generic.go:334] "Generic (PLEG): container finished" podID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerID="8ddc546cc3f3991e336a928e41297a654e01303391232133cff6addd93a06bab" exitCode=0 Feb 24 02:37:21.992823 master-0 kubenswrapper[31411]: I0224 02:37:21.992646 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-75c678c459-9mmbb" event={"ID":"9ef8199f-6610-44a1-b85c-fc7f2bea6294","Type":"ContainerDied","Data":"8ddc546cc3f3991e336a928e41297a654e01303391232133cff6addd93a06bab"} Feb 24 02:37:22.006719 master-0 kubenswrapper[31411]: I0224 02:37:22.004086 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerStarted","Data":"73f82ba17bb35991aff5a85c3b724dfe36b60d1b23908714aa8508722fb4f546"} Feb 24 02:37:22.006719 master-0 kubenswrapper[31411]: I0224 02:37:22.004144 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerStarted","Data":"05571f529ae34793a400f6fafd01945978b9052621b641ac755ef23dc4c36f60"} Feb 24 02:37:22.013348 master-0 kubenswrapper[31411]: I0224 02:37:22.011655 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85b75c94bc-pp6mc" 
event={"ID":"da4d9085-6b7b-4507-803b-39a20e05bf2c","Type":"ContainerStarted","Data":"3a3c3ab94a38c8f01b1b8b46a73a1c5d2ac1e35407f58286d656791d541c75fc"} Feb 24 02:37:22.013348 master-0 kubenswrapper[31411]: I0224 02:37:22.011729 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85b75c94bc-pp6mc" event={"ID":"da4d9085-6b7b-4507-803b-39a20e05bf2c","Type":"ContainerStarted","Data":"88a2dd95f7838512c21aadfd44d231f7268e8c6dcdff95ecaef44489a5911049"} Feb 24 02:37:22.236258 master-0 kubenswrapper[31411]: I0224 02:37:22.236202 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-volume-lvm-iscsi-0"] Feb 24 02:37:22.528772 master-0 kubenswrapper[31411]: I0224 02:37:22.528714 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f597cf46d-llslv" Feb 24 02:37:22.663176 master-0 kubenswrapper[31411]: I0224 02:37:22.663136 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f597cf46d-llslv" Feb 24 02:37:22.814840 master-0 kubenswrapper[31411]: I0224 02:37:22.770968 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-backup-0"] Feb 24 02:37:22.819140 master-0 kubenswrapper[31411]: W0224 02:37:22.817771 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa90bcd_7367_47bd_ab29_0fe04562014b.slice/crio-00aadca967448045a45a6aaeafecdd8a824348708c4fc88ce6252620f432b9e7 WatchSource:0}: Error finding container 00aadca967448045a45a6aaeafecdd8a824348708c4fc88ce6252620f432b9e7: Status 404 returned error can't find the container with id 00aadca967448045a45a6aaeafecdd8a824348708c4fc88ce6252620f432b9e7 Feb 24 02:37:23.034014 master-0 kubenswrapper[31411]: I0224 02:37:23.033751 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-backup-0" 
event={"ID":"eaa90bcd-7367-47bd-ab29-0fe04562014b","Type":"ContainerStarted","Data":"00aadca967448045a45a6aaeafecdd8a824348708c4fc88ce6252620f432b9e7"} Feb 24 02:37:23.038415 master-0 kubenswrapper[31411]: I0224 02:37:23.037605 31411 generic.go:334] "Generic (PLEG): container finished" podID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerID="75c2bc9278d53eece4e02e001ae4ac0840c5b0ff484c1768c89b37f29acd5aad" exitCode=1 Feb 24 02:37:23.038415 master-0 kubenswrapper[31411]: I0224 02:37:23.037732 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-75c678c459-9mmbb" event={"ID":"9ef8199f-6610-44a1-b85c-fc7f2bea6294","Type":"ContainerDied","Data":"75c2bc9278d53eece4e02e001ae4ac0840c5b0ff484c1768c89b37f29acd5aad"} Feb 24 02:37:23.038415 master-0 kubenswrapper[31411]: I0224 02:37:23.037800 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-75c678c459-9mmbb" event={"ID":"9ef8199f-6610-44a1-b85c-fc7f2bea6294","Type":"ContainerStarted","Data":"40661640f2bd585544ed15b57dab77da0a8d6a773cf8897a5977a704b6656bb7"} Feb 24 02:37:23.038950 master-0 kubenswrapper[31411]: I0224 02:37:23.038920 31411 scope.go:117] "RemoveContainer" containerID="75c2bc9278d53eece4e02e001ae4ac0840c5b0ff484c1768c89b37f29acd5aad" Feb 24 02:37:23.043419 master-0 kubenswrapper[31411]: I0224 02:37:23.043385 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" event={"ID":"d6d3b198-9449-43a4-9252-2659f60e7959","Type":"ContainerStarted","Data":"d790054aacee411367ace863e11a0bd6004e84aeec7586fffd3b275e8ba35c26"} Feb 24 02:37:23.043493 master-0 kubenswrapper[31411]: I0224 02:37:23.043428 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" event={"ID":"d6d3b198-9449-43a4-9252-2659f60e7959","Type":"ContainerStarted","Data":"329d3473b057bc3cd4fdfbe309a368fc8a6bdc06739dd63fa9972f359f98c47d"} Feb 24 02:37:23.043493 master-0 kubenswrapper[31411]: I0224 02:37:23.043443 
31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" event={"ID":"d6d3b198-9449-43a4-9252-2659f60e7959","Type":"ContainerStarted","Data":"7c9d618b0196add63448bddc3b496ceecfb9be5cb31e13faa249d8c2414e4683"} Feb 24 02:37:23.047607 master-0 kubenswrapper[31411]: I0224 02:37:23.047526 31411 generic.go:334] "Generic (PLEG): container finished" podID="da4d9085-6b7b-4507-803b-39a20e05bf2c" containerID="3a3c3ab94a38c8f01b1b8b46a73a1c5d2ac1e35407f58286d656791d541c75fc" exitCode=0 Feb 24 02:37:23.047771 master-0 kubenswrapper[31411]: I0224 02:37:23.047711 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85b75c94bc-pp6mc" event={"ID":"da4d9085-6b7b-4507-803b-39a20e05bf2c","Type":"ContainerDied","Data":"3a3c3ab94a38c8f01b1b8b46a73a1c5d2ac1e35407f58286d656791d541c75fc"} Feb 24 02:37:23.128677 master-0 kubenswrapper[31411]: I0224 02:37:23.123649 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fcfbc46-898e-4e7b-a96a-e2170c671a45" path="/var/lib/kubelet/pods/6fcfbc46-898e-4e7b-a96a-e2170c671a45/volumes" Feb 24 02:37:23.128677 master-0 kubenswrapper[31411]: I0224 02:37:23.124617 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e200864-094f-471c-ba9d-863d14ff8404" path="/var/lib/kubelet/pods/8e200864-094f-471c-ba9d-863d14ff8404/volumes" Feb 24 02:37:23.138149 master-0 kubenswrapper[31411]: I0224 02:37:23.138037 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" podStartSLOduration=2.138010502 podStartE2EDuration="2.138010502s" podCreationTimestamp="2026-02-24 02:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:23.119801101 +0000 UTC m=+986.336998947" watchObservedRunningTime="2026-02-24 02:37:23.138010502 +0000 UTC m=+986.355208348" Feb 24 02:37:23.544725 master-0 
kubenswrapper[31411]: I0224 02:37:23.542130 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 24 02:37:23.544725 master-0 kubenswrapper[31411]: I0224 02:37:23.544684 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 24 02:37:23.564601 master-0 kubenswrapper[31411]: I0224 02:37:23.562795 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 24 02:37:23.564601 master-0 kubenswrapper[31411]: I0224 02:37:23.562918 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 24 02:37:23.604394 master-0 kubenswrapper[31411]: I0224 02:37:23.604312 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 24 02:37:23.638965 master-0 kubenswrapper[31411]: I0224 02:37:23.638891 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/361cf20c-412e-43a0-b2e7-44a9d4964b05-openstack-config-secret\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.639180 master-0 kubenswrapper[31411]: I0224 02:37:23.639094 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwnrx\" (UniqueName: \"kubernetes.io/projected/361cf20c-412e-43a0-b2e7-44a9d4964b05-kube-api-access-lwnrx\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.639244 master-0 kubenswrapper[31411]: I0224 02:37:23.639218 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361cf20c-412e-43a0-b2e7-44a9d4964b05-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.639298 master-0 kubenswrapper[31411]: I0224 02:37:23.639282 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/361cf20c-412e-43a0-b2e7-44a9d4964b05-openstack-config\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.741781 master-0 kubenswrapper[31411]: I0224 02:37:23.741667 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/361cf20c-412e-43a0-b2e7-44a9d4964b05-openstack-config\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.741994 master-0 kubenswrapper[31411]: I0224 02:37:23.741966 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/361cf20c-412e-43a0-b2e7-44a9d4964b05-openstack-config-secret\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.742097 master-0 kubenswrapper[31411]: I0224 02:37:23.742058 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwnrx\" (UniqueName: \"kubernetes.io/projected/361cf20c-412e-43a0-b2e7-44a9d4964b05-kube-api-access-lwnrx\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.742259 master-0 kubenswrapper[31411]: I0224 02:37:23.742237 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361cf20c-412e-43a0-b2e7-44a9d4964b05-combined-ca-bundle\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " 
pod="openstack/openstackclient" Feb 24 02:37:23.745146 master-0 kubenswrapper[31411]: I0224 02:37:23.744777 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/361cf20c-412e-43a0-b2e7-44a9d4964b05-openstack-config\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.759243 master-0 kubenswrapper[31411]: I0224 02:37:23.759191 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/361cf20c-412e-43a0-b2e7-44a9d4964b05-openstack-config-secret\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.766311 master-0 kubenswrapper[31411]: I0224 02:37:23.766231 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwnrx\" (UniqueName: \"kubernetes.io/projected/361cf20c-412e-43a0-b2e7-44a9d4964b05-kube-api-access-lwnrx\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.767783 master-0 kubenswrapper[31411]: I0224 02:37:23.767097 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361cf20c-412e-43a0-b2e7-44a9d4964b05-combined-ca-bundle\") pod \"openstackclient\" (UID: \"361cf20c-412e-43a0-b2e7-44a9d4964b05\") " pod="openstack/openstackclient" Feb 24 02:37:23.975271 master-0 kubenswrapper[31411]: I0224 02:37:23.975189 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 24 02:37:24.072012 master-0 kubenswrapper[31411]: I0224 02:37:24.071803 31411 generic.go:334] "Generic (PLEG): container finished" podID="72106e8c-2a98-4a82-9f36-c820986c5665" containerID="13e540b83a6ce8264cb378fa002ef09e27fbf9e555b424f3a00379cdfc0c4c84" exitCode=1 Feb 24 02:37:24.072012 master-0 kubenswrapper[31411]: I0224 02:37:24.071910 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" event={"ID":"72106e8c-2a98-4a82-9f36-c820986c5665","Type":"ContainerDied","Data":"13e540b83a6ce8264cb378fa002ef09e27fbf9e555b424f3a00379cdfc0c4c84"} Feb 24 02:37:24.074417 master-0 kubenswrapper[31411]: I0224 02:37:24.073905 31411 scope.go:117] "RemoveContainer" containerID="13e540b83a6ce8264cb378fa002ef09e27fbf9e555b424f3a00379cdfc0c4c84" Feb 24 02:37:24.083126 master-0 kubenswrapper[31411]: I0224 02:37:24.083064 31411 generic.go:334] "Generic (PLEG): container finished" podID="8657494d-8b2e-4545-9713-e55ace12f329" containerID="eeb78254bf7f2baaf7618d1c8289888c1d4aedf8e156f196588221c7066ca527" exitCode=0 Feb 24 02:37:24.083245 master-0 kubenswrapper[31411]: I0224 02:37:24.083211 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-scheduler-0" event={"ID":"8657494d-8b2e-4545-9713-e55ace12f329","Type":"ContainerDied","Data":"eeb78254bf7f2baaf7618d1c8289888c1d4aedf8e156f196588221c7066ca527"} Feb 24 02:37:24.089383 master-0 kubenswrapper[31411]: I0224 02:37:24.089355 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-75c678c459-9mmbb" event={"ID":"9ef8199f-6610-44a1-b85c-fc7f2bea6294","Type":"ContainerStarted","Data":"29d1fd3e2cc0d6b1ac29b953abb9753599726a368e8a64ba12da6ee6a3c363e4"} Feb 24 02:37:24.089782 master-0 kubenswrapper[31411]: I0224 02:37:24.089753 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:24.114895 master-0 
kubenswrapper[31411]: I0224 02:37:24.114651 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85b75c94bc-pp6mc" event={"ID":"da4d9085-6b7b-4507-803b-39a20e05bf2c","Type":"ContainerStarted","Data":"0d465ed7ca1351b93a93fbc27c7f38cb0290864de09182f0320a25d41bb86092"} Feb 24 02:37:24.161841 master-0 kubenswrapper[31411]: I0224 02:37:24.161410 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-backup-0" event={"ID":"eaa90bcd-7367-47bd-ab29-0fe04562014b","Type":"ContainerStarted","Data":"f1edd546c525952d70138dd7c121098e468e4f9c488da30df27afb4dd2919092"} Feb 24 02:37:24.185236 master-0 kubenswrapper[31411]: I0224 02:37:24.185103 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-75c678c459-9mmbb" podStartSLOduration=6.470619621 podStartE2EDuration="9.185080284s" podCreationTimestamp="2026-02-24 02:37:15 +0000 UTC" firstStartedPulling="2026-02-24 02:37:17.250202871 +0000 UTC m=+980.467400717" lastFinishedPulling="2026-02-24 02:37:19.964663524 +0000 UTC m=+983.181861380" observedRunningTime="2026-02-24 02:37:24.153963272 +0000 UTC m=+987.371161128" watchObservedRunningTime="2026-02-24 02:37:24.185080284 +0000 UTC m=+987.402278130" Feb 24 02:37:24.437994 master-0 kubenswrapper[31411]: I0224 02:37:24.437340 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:24.591749 master-0 kubenswrapper[31411]: I0224 02:37:24.591692 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-combined-ca-bundle\") pod \"8657494d-8b2e-4545-9713-e55ace12f329\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " Feb 24 02:37:24.592209 master-0 kubenswrapper[31411]: I0224 02:37:24.592158 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data-custom\") pod \"8657494d-8b2e-4545-9713-e55ace12f329\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " Feb 24 02:37:24.592295 master-0 kubenswrapper[31411]: I0224 02:37:24.592273 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wzsv\" (UniqueName: \"kubernetes.io/projected/8657494d-8b2e-4545-9713-e55ace12f329-kube-api-access-2wzsv\") pod \"8657494d-8b2e-4545-9713-e55ace12f329\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " Feb 24 02:37:24.592678 master-0 kubenswrapper[31411]: I0224 02:37:24.592655 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8657494d-8b2e-4545-9713-e55ace12f329-etc-machine-id\") pod \"8657494d-8b2e-4545-9713-e55ace12f329\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " Feb 24 02:37:24.592751 master-0 kubenswrapper[31411]: I0224 02:37:24.592738 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-scripts\") pod \"8657494d-8b2e-4545-9713-e55ace12f329\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " Feb 24 02:37:24.592804 master-0 kubenswrapper[31411]: I0224 02:37:24.592785 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data\") pod \"8657494d-8b2e-4545-9713-e55ace12f329\" (UID: \"8657494d-8b2e-4545-9713-e55ace12f329\") " Feb 24 02:37:24.598869 master-0 kubenswrapper[31411]: I0224 02:37:24.598804 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8657494d-8b2e-4545-9713-e55ace12f329-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8657494d-8b2e-4545-9713-e55ace12f329" (UID: "8657494d-8b2e-4545-9713-e55ace12f329"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 02:37:24.608354 master-0 kubenswrapper[31411]: I0224 02:37:24.608292 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 24 02:37:24.638665 master-0 kubenswrapper[31411]: I0224 02:37:24.638548 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8657494d-8b2e-4545-9713-e55ace12f329" (UID: "8657494d-8b2e-4545-9713-e55ace12f329"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:24.639410 master-0 kubenswrapper[31411]: I0224 02:37:24.639286 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-scripts" (OuterVolumeSpecName: "scripts") pod "8657494d-8b2e-4545-9713-e55ace12f329" (UID: "8657494d-8b2e-4545-9713-e55ace12f329"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:24.646072 master-0 kubenswrapper[31411]: I0224 02:37:24.646025 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8657494d-8b2e-4545-9713-e55ace12f329-kube-api-access-2wzsv" (OuterVolumeSpecName: "kube-api-access-2wzsv") pod "8657494d-8b2e-4545-9713-e55ace12f329" (UID: "8657494d-8b2e-4545-9713-e55ace12f329"). InnerVolumeSpecName "kube-api-access-2wzsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:24.648753 master-0 kubenswrapper[31411]: W0224 02:37:24.648702 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod361cf20c_412e_43a0_b2e7_44a9d4964b05.slice/crio-4bb00a00ee6c219af5233deed78bab33e8f5e42539fa30a19a025284ac5b1470 WatchSource:0}: Error finding container 4bb00a00ee6c219af5233deed78bab33e8f5e42539fa30a19a025284ac5b1470: Status 404 returned error can't find the container with id 4bb00a00ee6c219af5233deed78bab33e8f5e42539fa30a19a025284ac5b1470 Feb 24 02:37:24.700106 master-0 kubenswrapper[31411]: I0224 02:37:24.700056 31411 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:24.700106 master-0 kubenswrapper[31411]: I0224 02:37:24.700101 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wzsv\" (UniqueName: \"kubernetes.io/projected/8657494d-8b2e-4545-9713-e55ace12f329-kube-api-access-2wzsv\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:24.700843 master-0 kubenswrapper[31411]: I0224 02:37:24.700118 31411 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8657494d-8b2e-4545-9713-e55ace12f329-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:24.700843 master-0 
kubenswrapper[31411]: I0224 02:37:24.700127 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:24.804294 master-0 kubenswrapper[31411]: I0224 02:37:24.804133 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8657494d-8b2e-4545-9713-e55ace12f329" (UID: "8657494d-8b2e-4545-9713-e55ace12f329"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:24.847608 master-0 kubenswrapper[31411]: I0224 02:37:24.843382 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data" (OuterVolumeSpecName: "config-data") pod "8657494d-8b2e-4545-9713-e55ace12f329" (UID: "8657494d-8b2e-4545-9713-e55ace12f329"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:24.912354 master-0 kubenswrapper[31411]: I0224 02:37:24.912280 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:24.912354 master-0 kubenswrapper[31411]: I0224 02:37:24.912326 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8657494d-8b2e-4545-9713-e55ace12f329-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:25.179526 master-0 kubenswrapper[31411]: I0224 02:37:25.179265 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.180740 master-0 kubenswrapper[31411]: I0224 02:37:25.180696 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-scheduler-0" event={"ID":"8657494d-8b2e-4545-9713-e55ace12f329","Type":"ContainerDied","Data":"88ff67047f2a86ba7b082e089a5ee6ec71ff924c9922bd408b5c2f89646f327a"} Feb 24 02:37:25.180806 master-0 kubenswrapper[31411]: I0224 02:37:25.180760 31411 scope.go:117] "RemoveContainer" containerID="86a3152df7e3a2c46f55c0278d2eb568f064bdbae35cf4046fa2c6c2d3131640" Feb 24 02:37:25.191787 master-0 kubenswrapper[31411]: I0224 02:37:25.191727 31411 generic.go:334] "Generic (PLEG): container finished" podID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerID="29d1fd3e2cc0d6b1ac29b953abb9753599726a368e8a64ba12da6ee6a3c363e4" exitCode=1 Feb 24 02:37:25.191921 master-0 kubenswrapper[31411]: I0224 02:37:25.191826 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-75c678c459-9mmbb" event={"ID":"9ef8199f-6610-44a1-b85c-fc7f2bea6294","Type":"ContainerDied","Data":"29d1fd3e2cc0d6b1ac29b953abb9753599726a368e8a64ba12da6ee6a3c363e4"} Feb 24 02:37:25.193065 master-0 kubenswrapper[31411]: I0224 02:37:25.193032 31411 scope.go:117] "RemoveContainer" containerID="29d1fd3e2cc0d6b1ac29b953abb9753599726a368e8a64ba12da6ee6a3c363e4" Feb 24 02:37:25.193376 master-0 kubenswrapper[31411]: E0224 02:37:25.193343 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-75c678c459-9mmbb_openstack(9ef8199f-6610-44a1-b85c-fc7f2bea6294)\"" pod="openstack/ironic-75c678c459-9mmbb" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" Feb 24 02:37:25.196016 master-0 kubenswrapper[31411]: I0224 02:37:25.195643 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"361cf20c-412e-43a0-b2e7-44a9d4964b05","Type":"ContainerStarted","Data":"4bb00a00ee6c219af5233deed78bab33e8f5e42539fa30a19a025284ac5b1470"} Feb 24 02:37:25.201451 master-0 kubenswrapper[31411]: I0224 02:37:25.201372 31411 generic.go:334] "Generic (PLEG): container finished" podID="d289f4ce-9a2f-4d57-bf7b-414618c7c4e8" containerID="73f82ba17bb35991aff5a85c3b724dfe36b60d1b23908714aa8508722fb4f546" exitCode=0 Feb 24 02:37:25.201543 master-0 kubenswrapper[31411]: I0224 02:37:25.201505 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerDied","Data":"73f82ba17bb35991aff5a85c3b724dfe36b60d1b23908714aa8508722fb4f546"} Feb 24 02:37:25.205715 master-0 kubenswrapper[31411]: I0224 02:37:25.205675 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85b75c94bc-pp6mc" event={"ID":"da4d9085-6b7b-4507-803b-39a20e05bf2c","Type":"ContainerStarted","Data":"fe31b308323aefbb4c3d8a3473e46e9dac54b78a9facf605c65f3f55ae6b7861"} Feb 24 02:37:25.206069 master-0 kubenswrapper[31411]: I0224 02:37:25.205921 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:25.212736 master-0 kubenswrapper[31411]: I0224 02:37:25.212680 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-backup-0" event={"ID":"eaa90bcd-7367-47bd-ab29-0fe04562014b","Type":"ContainerStarted","Data":"7a8cfd8317da09fe60bb72df477a4dc1f342d45d72d1bbcba3f8d99b251032a1"} Feb 24 02:37:25.215128 master-0 kubenswrapper[31411]: I0224 02:37:25.215087 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" event={"ID":"72106e8c-2a98-4a82-9f36-c820986c5665","Type":"ContainerStarted","Data":"73d9ad9599a99a5baabf99f214a4ed014fbc75e5756f2c85822b5bc219ed46c4"} Feb 24 02:37:25.215953 master-0 kubenswrapper[31411]: I0224 02:37:25.215921 31411 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" Feb 24 02:37:25.227816 master-0 kubenswrapper[31411]: I0224 02:37:25.227101 31411 scope.go:117] "RemoveContainer" containerID="eeb78254bf7f2baaf7618d1c8289888c1d4aedf8e156f196588221c7066ca527" Feb 24 02:37:25.265040 master-0 kubenswrapper[31411]: I0224 02:37:25.258336 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ac23-scheduler-0"] Feb 24 02:37:25.317550 master-0 kubenswrapper[31411]: I0224 02:37:25.317486 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6ac23-scheduler-0"] Feb 24 02:37:25.320129 master-0 kubenswrapper[31411]: I0224 02:37:25.320099 31411 scope.go:117] "RemoveContainer" containerID="75c2bc9278d53eece4e02e001ae4ac0840c5b0ff484c1768c89b37f29acd5aad" Feb 24 02:37:25.356594 master-0 kubenswrapper[31411]: I0224 02:37:25.351900 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ac23-scheduler-0"] Feb 24 02:37:25.356594 master-0 kubenswrapper[31411]: E0224 02:37:25.352832 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8657494d-8b2e-4545-9713-e55ace12f329" containerName="probe" Feb 24 02:37:25.356594 master-0 kubenswrapper[31411]: I0224 02:37:25.352849 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8657494d-8b2e-4545-9713-e55ace12f329" containerName="probe" Feb 24 02:37:25.356594 master-0 kubenswrapper[31411]: E0224 02:37:25.352916 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8657494d-8b2e-4545-9713-e55ace12f329" containerName="cinder-scheduler" Feb 24 02:37:25.356594 master-0 kubenswrapper[31411]: I0224 02:37:25.352923 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8657494d-8b2e-4545-9713-e55ace12f329" containerName="cinder-scheduler" Feb 24 02:37:25.356594 master-0 kubenswrapper[31411]: I0224 02:37:25.353208 31411 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8657494d-8b2e-4545-9713-e55ace12f329" containerName="probe" Feb 24 02:37:25.356594 master-0 kubenswrapper[31411]: I0224 02:37:25.353269 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="8657494d-8b2e-4545-9713-e55ace12f329" containerName="cinder-scheduler" Feb 24 02:37:25.356594 master-0 kubenswrapper[31411]: I0224 02:37:25.355027 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.367602 master-0 kubenswrapper[31411]: I0224 02:37:25.367068 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-85b75c94bc-pp6mc" podStartSLOduration=7.367044488 podStartE2EDuration="7.367044488s" podCreationTimestamp="2026-02-24 02:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:25.29077218 +0000 UTC m=+988.507970026" watchObservedRunningTime="2026-02-24 02:37:25.367044488 +0000 UTC m=+988.584242334" Feb 24 02:37:25.367602 master-0 kubenswrapper[31411]: I0224 02:37:25.367382 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-6ac23-scheduler-config-data" Feb 24 02:37:25.426647 master-0 kubenswrapper[31411]: I0224 02:37:25.424022 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-scheduler-0"] Feb 24 02:37:25.430614 master-0 kubenswrapper[31411]: I0224 02:37:25.429933 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ac23-backup-0" podStartSLOduration=4.429909311 podStartE2EDuration="4.429909311s" podCreationTimestamp="2026-02-24 02:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:25.320688579 +0000 UTC m=+988.537886425" watchObservedRunningTime="2026-02-24 02:37:25.429909311 +0000 UTC m=+988.647107157" 
Feb 24 02:37:25.440206 master-0 kubenswrapper[31411]: I0224 02:37:25.440122 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-combined-ca-bundle\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.440500 master-0 kubenswrapper[31411]: I0224 02:37:25.440253 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-config-data\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.440500 master-0 kubenswrapper[31411]: I0224 02:37:25.440347 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n92k\" (UniqueName: \"kubernetes.io/projected/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-kube-api-access-4n92k\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.440850 master-0 kubenswrapper[31411]: I0224 02:37:25.440817 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-etc-machine-id\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.440919 master-0 kubenswrapper[31411]: I0224 02:37:25.440899 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-scripts\") pod \"cinder-6ac23-scheduler-0\" (UID: 
\"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.442204 master-0 kubenswrapper[31411]: I0224 02:37:25.440956 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-config-data-custom\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.543568 master-0 kubenswrapper[31411]: I0224 02:37:25.543475 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-combined-ca-bundle\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.544020 master-0 kubenswrapper[31411]: I0224 02:37:25.543706 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-config-data\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.544020 master-0 kubenswrapper[31411]: I0224 02:37:25.543770 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n92k\" (UniqueName: \"kubernetes.io/projected/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-kube-api-access-4n92k\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.544020 master-0 kubenswrapper[31411]: I0224 02:37:25.543798 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-etc-machine-id\") pod 
\"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.544020 master-0 kubenswrapper[31411]: I0224 02:37:25.543863 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-scripts\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.544217 master-0 kubenswrapper[31411]: I0224 02:37:25.544055 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-config-data-custom\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.544384 master-0 kubenswrapper[31411]: I0224 02:37:25.544318 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-etc-machine-id\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.548303 master-0 kubenswrapper[31411]: I0224 02:37:25.548257 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-combined-ca-bundle\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.554046 master-0 kubenswrapper[31411]: I0224 02:37:25.549532 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-config-data-custom\") pod \"cinder-6ac23-scheduler-0\" (UID: 
\"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.554046 master-0 kubenswrapper[31411]: I0224 02:37:25.552899 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-config-data\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.554046 master-0 kubenswrapper[31411]: I0224 02:37:25.553373 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-scripts\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.562118 master-0 kubenswrapper[31411]: I0224 02:37:25.562068 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n92k\" (UniqueName: \"kubernetes.io/projected/ed7505b2-4ddf-4df3-a4f7-7b198aacd70b-kube-api-access-4n92k\") pod \"cinder-6ac23-scheduler-0\" (UID: \"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b\") " pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:25.762416 master-0 kubenswrapper[31411]: I0224 02:37:25.761777 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:26.271881 master-0 kubenswrapper[31411]: I0224 02:37:26.271386 31411 scope.go:117] "RemoveContainer" containerID="29d1fd3e2cc0d6b1ac29b953abb9753599726a368e8a64ba12da6ee6a3c363e4" Feb 24 02:37:26.271881 master-0 kubenswrapper[31411]: E0224 02:37:26.271739 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-75c678c459-9mmbb_openstack(9ef8199f-6610-44a1-b85c-fc7f2bea6294)\"" pod="openstack/ironic-75c678c459-9mmbb" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" Feb 24 02:37:26.366288 master-0 kubenswrapper[31411]: I0224 02:37:26.366063 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:37:26.378631 master-0 kubenswrapper[31411]: I0224 02:37:26.372777 31411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:26.511833 master-0 kubenswrapper[31411]: I0224 02:37:26.506363 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ac23-scheduler-0"] Feb 24 02:37:26.527079 master-0 kubenswrapper[31411]: I0224 02:37:26.525196 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f74bd995c-jflbg"] Feb 24 02:37:26.527079 master-0 kubenswrapper[31411]: I0224 02:37:26.525486 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" podUID="78056aef-5d74-4349-9faa-7ee56e5090b6" containerName="dnsmasq-dns" containerID="cri-o://e385e5e5a8a5cfb0183077cd94d1d4cb19ee83e46fecb9613441aff3b0fa8342" gracePeriod=10 Feb 24 02:37:26.723880 master-0 kubenswrapper[31411]: I0224 02:37:26.723852 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:26.816877 master-0 kubenswrapper[31411]: I0224 02:37:26.816809 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-6ac23-api-0" Feb 24 02:37:26.863301 master-0 kubenswrapper[31411]: I0224 02:37:26.860763 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:27.142779 master-0 kubenswrapper[31411]: I0224 02:37:27.142513 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8657494d-8b2e-4545-9713-e55ace12f329" path="/var/lib/kubelet/pods/8657494d-8b2e-4545-9713-e55ace12f329/volumes" Feb 24 02:37:27.341022 master-0 kubenswrapper[31411]: I0224 02:37:27.340932 31411 generic.go:334] "Generic (PLEG): container finished" podID="78056aef-5d74-4349-9faa-7ee56e5090b6" containerID="e385e5e5a8a5cfb0183077cd94d1d4cb19ee83e46fecb9613441aff3b0fa8342" exitCode=0 Feb 24 02:37:27.341747 master-0 kubenswrapper[31411]: I0224 02:37:27.341058 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" event={"ID":"78056aef-5d74-4349-9faa-7ee56e5090b6","Type":"ContainerDied","Data":"e385e5e5a8a5cfb0183077cd94d1d4cb19ee83e46fecb9613441aff3b0fa8342"} Feb 24 02:37:27.341747 master-0 kubenswrapper[31411]: I0224 02:37:27.341126 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" event={"ID":"78056aef-5d74-4349-9faa-7ee56e5090b6","Type":"ContainerDied","Data":"e747023b7c2b65a08acd038638272e87654f99bdd6bad24712f741a7a4d80736"} Feb 24 02:37:27.341747 master-0 kubenswrapper[31411]: I0224 02:37:27.341146 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e747023b7c2b65a08acd038638272e87654f99bdd6bad24712f741a7a4d80736" Feb 24 02:37:27.348773 master-0 kubenswrapper[31411]: I0224 02:37:27.348712 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-6ac23-scheduler-0" event={"ID":"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b","Type":"ContainerStarted","Data":"bc9f811aa98064a8b455103d4196202e1352a6798a587f0f3b9d260c177757c6"} Feb 24 02:37:27.353350 master-0 kubenswrapper[31411]: I0224 02:37:27.352675 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:37:27.356613 master-0 kubenswrapper[31411]: I0224 02:37:27.354445 31411 scope.go:117] "RemoveContainer" containerID="29d1fd3e2cc0d6b1ac29b953abb9753599726a368e8a64ba12da6ee6a3c363e4" Feb 24 02:37:27.356613 master-0 kubenswrapper[31411]: E0224 02:37:27.354718 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-75c678c459-9mmbb_openstack(9ef8199f-6610-44a1-b85c-fc7f2bea6294)\"" pod="openstack/ironic-75c678c459-9mmbb" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" Feb 24 02:37:27.534314 master-0 kubenswrapper[31411]: I0224 02:37:27.534124 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-sb\") pod \"78056aef-5d74-4349-9faa-7ee56e5090b6\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " Feb 24 02:37:27.534314 master-0 kubenswrapper[31411]: I0224 02:37:27.534322 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dptb\" (UniqueName: \"kubernetes.io/projected/78056aef-5d74-4349-9faa-7ee56e5090b6-kube-api-access-5dptb\") pod \"78056aef-5d74-4349-9faa-7ee56e5090b6\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " Feb 24 02:37:27.534558 master-0 kubenswrapper[31411]: I0224 02:37:27.534367 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-nb\") pod \"78056aef-5d74-4349-9faa-7ee56e5090b6\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " Feb 24 02:37:27.534558 master-0 kubenswrapper[31411]: I0224 02:37:27.534487 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-swift-storage-0\") pod \"78056aef-5d74-4349-9faa-7ee56e5090b6\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " Feb 24 02:37:27.534558 master-0 kubenswrapper[31411]: I0224 02:37:27.534548 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-svc\") pod \"78056aef-5d74-4349-9faa-7ee56e5090b6\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " Feb 24 02:37:27.536196 master-0 kubenswrapper[31411]: I0224 02:37:27.536171 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-config\") pod \"78056aef-5d74-4349-9faa-7ee56e5090b6\" (UID: \"78056aef-5d74-4349-9faa-7ee56e5090b6\") " Feb 24 02:37:27.547168 master-0 kubenswrapper[31411]: I0224 02:37:27.546875 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78056aef-5d74-4349-9faa-7ee56e5090b6-kube-api-access-5dptb" (OuterVolumeSpecName: "kube-api-access-5dptb") pod "78056aef-5d74-4349-9faa-7ee56e5090b6" (UID: "78056aef-5d74-4349-9faa-7ee56e5090b6"). InnerVolumeSpecName "kube-api-access-5dptb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:27.626510 master-0 kubenswrapper[31411]: I0224 02:37:27.626289 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-config" (OuterVolumeSpecName: "config") pod "78056aef-5d74-4349-9faa-7ee56e5090b6" (UID: "78056aef-5d74-4349-9faa-7ee56e5090b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:27.628178 master-0 kubenswrapper[31411]: I0224 02:37:27.627075 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78056aef-5d74-4349-9faa-7ee56e5090b6" (UID: "78056aef-5d74-4349-9faa-7ee56e5090b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:27.628178 master-0 kubenswrapper[31411]: I0224 02:37:27.627622 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "78056aef-5d74-4349-9faa-7ee56e5090b6" (UID: "78056aef-5d74-4349-9faa-7ee56e5090b6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:27.633132 master-0 kubenswrapper[31411]: I0224 02:37:27.633077 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78056aef-5d74-4349-9faa-7ee56e5090b6" (UID: "78056aef-5d74-4349-9faa-7ee56e5090b6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:27.641245 master-0 kubenswrapper[31411]: I0224 02:37:27.641188 31411 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:27.641245 master-0 kubenswrapper[31411]: I0224 02:37:27.641235 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:27.641245 master-0 kubenswrapper[31411]: I0224 02:37:27.641247 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:27.641245 master-0 kubenswrapper[31411]: I0224 02:37:27.641256 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:27.641245 master-0 kubenswrapper[31411]: I0224 02:37:27.641266 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dptb\" (UniqueName: \"kubernetes.io/projected/78056aef-5d74-4349-9faa-7ee56e5090b6-kube-api-access-5dptb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:27.663266 master-0 kubenswrapper[31411]: I0224 02:37:27.663133 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78056aef-5d74-4349-9faa-7ee56e5090b6" (UID: "78056aef-5d74-4349-9faa-7ee56e5090b6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:27.765330 master-0 kubenswrapper[31411]: I0224 02:37:27.765179 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78056aef-5d74-4349-9faa-7ee56e5090b6-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:28.371600 master-0 kubenswrapper[31411]: I0224 02:37:28.371433 31411 generic.go:334] "Generic (PLEG): container finished" podID="72106e8c-2a98-4a82-9f36-c820986c5665" containerID="73d9ad9599a99a5baabf99f214a4ed014fbc75e5756f2c85822b5bc219ed46c4" exitCode=1 Feb 24 02:37:28.371600 master-0 kubenswrapper[31411]: I0224 02:37:28.371523 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" event={"ID":"72106e8c-2a98-4a82-9f36-c820986c5665","Type":"ContainerDied","Data":"73d9ad9599a99a5baabf99f214a4ed014fbc75e5756f2c85822b5bc219ed46c4"} Feb 24 02:37:28.372296 master-0 kubenswrapper[31411]: I0224 02:37:28.371642 31411 scope.go:117] "RemoveContainer" containerID="13e540b83a6ce8264cb378fa002ef09e27fbf9e555b424f3a00379cdfc0c4c84" Feb 24 02:37:28.387590 master-0 kubenswrapper[31411]: I0224 02:37:28.381244 31411 scope.go:117] "RemoveContainer" containerID="73d9ad9599a99a5baabf99f214a4ed014fbc75e5756f2c85822b5bc219ed46c4" Feb 24 02:37:28.387590 master-0 kubenswrapper[31411]: E0224 02:37:28.381698 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7d8f6784f6-dqjdm_openstack(72106e8c-2a98-4a82-9f36-c820986c5665)\"" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" podUID="72106e8c-2a98-4a82-9f36-c820986c5665" Feb 24 02:37:28.407590 master-0 kubenswrapper[31411]: I0224 02:37:28.403666 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f74bd995c-jflbg" Feb 24 02:37:28.407590 master-0 kubenswrapper[31411]: I0224 02:37:28.403751 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-scheduler-0" event={"ID":"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b","Type":"ContainerStarted","Data":"fec8674a538a1358bd8ffb9dfbd2cac58f84eb95c048e66188c4f3d9530de9ad"} Feb 24 02:37:28.407590 master-0 kubenswrapper[31411]: I0224 02:37:28.403839 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ac23-scheduler-0" event={"ID":"ed7505b2-4ddf-4df3-a4f7-7b198aacd70b","Type":"ContainerStarted","Data":"13c3afa135d2184723b936ba17f5cf73e76310b38d781df1d3dfdbf66cbfca75"} Feb 24 02:37:28.461789 master-0 kubenswrapper[31411]: I0224 02:37:28.457293 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ac23-scheduler-0" podStartSLOduration=3.457269155 podStartE2EDuration="3.457269155s" podCreationTimestamp="2026-02-24 02:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:28.448349035 +0000 UTC m=+991.665546881" watchObservedRunningTime="2026-02-24 02:37:28.457269155 +0000 UTC m=+991.674467001" Feb 24 02:37:28.510601 master-0 kubenswrapper[31411]: I0224 02:37:28.509966 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f74bd995c-jflbg"] Feb 24 02:37:28.527590 master-0 kubenswrapper[31411]: I0224 02:37:28.523637 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f74bd995c-jflbg"] Feb 24 02:37:29.159341 master-0 kubenswrapper[31411]: I0224 02:37:29.159058 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78056aef-5d74-4349-9faa-7ee56e5090b6" path="/var/lib/kubelet/pods/78056aef-5d74-4349-9faa-7ee56e5090b6/volumes" Feb 24 02:37:29.433808 master-0 kubenswrapper[31411]: I0224 02:37:29.433648 31411 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-675fbd6d58-pdtfj"] Feb 24 02:37:29.434539 master-0 kubenswrapper[31411]: E0224 02:37:29.434499 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78056aef-5d74-4349-9faa-7ee56e5090b6" containerName="init" Feb 24 02:37:29.434539 master-0 kubenswrapper[31411]: I0224 02:37:29.434536 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="78056aef-5d74-4349-9faa-7ee56e5090b6" containerName="init" Feb 24 02:37:29.434642 master-0 kubenswrapper[31411]: E0224 02:37:29.434632 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78056aef-5d74-4349-9faa-7ee56e5090b6" containerName="dnsmasq-dns" Feb 24 02:37:29.434642 master-0 kubenswrapper[31411]: I0224 02:37:29.434641 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="78056aef-5d74-4349-9faa-7ee56e5090b6" containerName="dnsmasq-dns" Feb 24 02:37:29.435073 master-0 kubenswrapper[31411]: I0224 02:37:29.435003 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="78056aef-5d74-4349-9faa-7ee56e5090b6" containerName="dnsmasq-dns" Feb 24 02:37:29.436731 master-0 kubenswrapper[31411]: I0224 02:37:29.436706 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.441674 master-0 kubenswrapper[31411]: I0224 02:37:29.441631 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 24 02:37:29.444261 master-0 kubenswrapper[31411]: I0224 02:37:29.444209 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 24 02:37:29.444716 master-0 kubenswrapper[31411]: I0224 02:37:29.444695 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 24 02:37:29.455664 master-0 kubenswrapper[31411]: I0224 02:37:29.455601 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-675fbd6d58-pdtfj"] Feb 24 02:37:29.553339 master-0 kubenswrapper[31411]: I0224 02:37:29.553274 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-public-tls-certs\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.553762 master-0 kubenswrapper[31411]: I0224 02:37:29.553741 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-combined-ca-bundle\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.553952 master-0 kubenswrapper[31411]: I0224 02:37:29.553937 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-log-httpd\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: 
\"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.554237 master-0 kubenswrapper[31411]: I0224 02:37:29.554213 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-config-data\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.554440 master-0 kubenswrapper[31411]: I0224 02:37:29.554425 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-run-httpd\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.554553 master-0 kubenswrapper[31411]: I0224 02:37:29.554536 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-etc-swift\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.554662 master-0 kubenswrapper[31411]: I0224 02:37:29.554649 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-internal-tls-certs\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.554770 master-0 kubenswrapper[31411]: I0224 02:37:29.554757 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwnv7\" (UniqueName: 
\"kubernetes.io/projected/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-kube-api-access-qwnv7\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.659424 master-0 kubenswrapper[31411]: I0224 02:37:29.659326 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-config-data\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.660287 master-0 kubenswrapper[31411]: I0224 02:37:29.659649 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-run-httpd\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.660287 master-0 kubenswrapper[31411]: I0224 02:37:29.660015 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-etc-swift\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.660287 master-0 kubenswrapper[31411]: I0224 02:37:29.660088 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-internal-tls-certs\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.660287 master-0 kubenswrapper[31411]: I0224 02:37:29.660162 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwnv7\" 
(UniqueName: \"kubernetes.io/projected/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-kube-api-access-qwnv7\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.660287 master-0 kubenswrapper[31411]: I0224 02:37:29.660199 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-public-tls-certs\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.660287 master-0 kubenswrapper[31411]: I0224 02:37:29.660237 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-combined-ca-bundle\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.660956 master-0 kubenswrapper[31411]: I0224 02:37:29.660308 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-log-httpd\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.663322 master-0 kubenswrapper[31411]: I0224 02:37:29.663288 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-run-httpd\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.664216 master-0 kubenswrapper[31411]: I0224 02:37:29.664179 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-config-data\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.665785 master-0 kubenswrapper[31411]: I0224 02:37:29.665764 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-combined-ca-bundle\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.672916 master-0 kubenswrapper[31411]: I0224 02:37:29.672854 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"] Feb 24 02:37:29.673234 master-0 kubenswrapper[31411]: I0224 02:37:29.673176 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-8705a-default-internal-api-0" podUID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerName="glance-log" containerID="cri-o://d22d5d85603b1447841eb0a9617422fe92c19152b0c341477cd29d81d8ba7274" gracePeriod=30 Feb 24 02:37:29.676344 master-0 kubenswrapper[31411]: I0224 02:37:29.674069 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-8705a-default-internal-api-0" podUID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerName="glance-httpd" containerID="cri-o://ab53938e2cb3d0282666094cb678cbb91c93a2a0e167f53582f18f100dd71ac2" gracePeriod=30 Feb 24 02:37:29.678083 master-0 kubenswrapper[31411]: I0224 02:37:29.678043 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-internal-tls-certs\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.678181 master-0 
kubenswrapper[31411]: I0224 02:37:29.678143 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-log-httpd\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.687875 master-0 kubenswrapper[31411]: I0224 02:37:29.687521 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-public-tls-certs\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.687875 master-0 kubenswrapper[31411]: I0224 02:37:29.687794 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-etc-swift\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.695261 master-0 kubenswrapper[31411]: I0224 02:37:29.695222 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwnv7\" (UniqueName: \"kubernetes.io/projected/ea5f1aef-3469-4396-b111-a7fd2dc4f40c-kube-api-access-qwnv7\") pod \"swift-proxy-675fbd6d58-pdtfj\" (UID: \"ea5f1aef-3469-4396-b111-a7fd2dc4f40c\") " pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:29.814428 master-0 kubenswrapper[31411]: I0224 02:37:29.814364 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:30.364191 master-0 kubenswrapper[31411]: I0224 02:37:30.364012 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-8hw9n"] Feb 24 02:37:30.366369 master-0 kubenswrapper[31411]: I0224 02:37:30.366335 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.371713 master-0 kubenswrapper[31411]: I0224 02:37:30.371640 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 24 02:37:30.374838 master-0 kubenswrapper[31411]: I0224 02:37:30.374771 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-8hw9n"] Feb 24 02:37:30.376139 master-0 kubenswrapper[31411]: I0224 02:37:30.376102 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 24 02:37:30.504694 master-0 kubenswrapper[31411]: I0224 02:37:30.503774 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.504694 master-0 kubenswrapper[31411]: I0224 02:37:30.503989 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-config\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.504694 master-0 kubenswrapper[31411]: I0224 02:37:30.504292 31411 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-combined-ca-bundle\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.504694 master-0 kubenswrapper[31411]: I0224 02:37:30.504548 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-scripts\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.504694 master-0 kubenswrapper[31411]: I0224 02:37:30.504682 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.507254 master-0 kubenswrapper[31411]: I0224 02:37:30.504922 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3695612d-87a3-4401-9303-05ae933a9f78-etc-podinfo\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.507254 master-0 kubenswrapper[31411]: I0224 02:37:30.505069 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvdr\" (UniqueName: \"kubernetes.io/projected/3695612d-87a3-4401-9303-05ae933a9f78-kube-api-access-2bvdr\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " 
pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.555747 master-0 kubenswrapper[31411]: I0224 02:37:30.555650 31411 generic.go:334] "Generic (PLEG): container finished" podID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerID="d22d5d85603b1447841eb0a9617422fe92c19152b0c341477cd29d81d8ba7274" exitCode=143 Feb 24 02:37:30.555747 master-0 kubenswrapper[31411]: I0224 02:37:30.555720 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d7d5c157-238f-4062-8e57-dccee5fa4f9e","Type":"ContainerDied","Data":"d22d5d85603b1447841eb0a9617422fe92c19152b0c341477cd29d81d8ba7274"} Feb 24 02:37:30.609228 master-0 kubenswrapper[31411]: I0224 02:37:30.609184 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.609398 master-0 kubenswrapper[31411]: I0224 02:37:30.609249 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-config\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.611299 master-0 kubenswrapper[31411]: I0224 02:37:30.609544 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-combined-ca-bundle\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.611299 master-0 kubenswrapper[31411]: I0224 02:37:30.610128 31411 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-scripts\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.611299 master-0 kubenswrapper[31411]: I0224 02:37:30.610234 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.611299 master-0 kubenswrapper[31411]: I0224 02:37:30.610438 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3695612d-87a3-4401-9303-05ae933a9f78-etc-podinfo\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.611299 master-0 kubenswrapper[31411]: I0224 02:37:30.610564 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bvdr\" (UniqueName: \"kubernetes.io/projected/3695612d-87a3-4401-9303-05ae933a9f78-kube-api-access-2bvdr\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.611299 master-0 kubenswrapper[31411]: I0224 02:37:30.610892 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 
02:37:30.615338 master-0 kubenswrapper[31411]: I0224 02:37:30.615233 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.615902 master-0 kubenswrapper[31411]: I0224 02:37:30.615860 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-scripts\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.618874 master-0 kubenswrapper[31411]: I0224 02:37:30.618825 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3695612d-87a3-4401-9303-05ae933a9f78-etc-podinfo\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.621725 master-0 kubenswrapper[31411]: I0224 02:37:30.621646 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-combined-ca-bundle\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.622705 master-0 kubenswrapper[31411]: I0224 02:37:30.622677 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-config\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.632082 master-0 
kubenswrapper[31411]: I0224 02:37:30.632016 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bvdr\" (UniqueName: \"kubernetes.io/projected/3695612d-87a3-4401-9303-05ae933a9f78-kube-api-access-2bvdr\") pod \"ironic-inspector-db-sync-8hw9n\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.717881 master-0 kubenswrapper[31411]: I0224 02:37:30.717798 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:30.762119 master-0 kubenswrapper[31411]: I0224 02:37:30.762055 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-6ac23-scheduler-0" Feb 24 02:37:30.983299 master-0 kubenswrapper[31411]: I0224 02:37:30.981203 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-85b75c94bc-pp6mc" Feb 24 02:37:31.046821 master-0 kubenswrapper[31411]: I0224 02:37:31.045774 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:31.097082 master-0 kubenswrapper[31411]: I0224 02:37:31.095131 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-75c678c459-9mmbb"] Feb 24 02:37:31.097082 master-0 kubenswrapper[31411]: I0224 02:37:31.095589 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-75c678c459-9mmbb" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api-log" containerID="cri-o://40661640f2bd585544ed15b57dab77da0a8d6a773cf8897a5977a704b6656bb7" gracePeriod=60 Feb 24 02:37:31.249894 master-0 kubenswrapper[31411]: I0224 02:37:31.235418 31411 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" Feb 24 02:37:31.249894 master-0 kubenswrapper[31411]: I0224 02:37:31.236261 31411 
scope.go:117] "RemoveContainer" containerID="73d9ad9599a99a5baabf99f214a4ed014fbc75e5756f2c85822b5bc219ed46c4" Feb 24 02:37:31.249894 master-0 kubenswrapper[31411]: E0224 02:37:31.238186 31411 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7d8f6784f6-dqjdm_openstack(72106e8c-2a98-4a82-9f36-c820986c5665)\"" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" podUID="72106e8c-2a98-4a82-9f36-c820986c5665" Feb 24 02:37:31.374597 master-0 kubenswrapper[31411]: E0224 02:37:31.363095 31411 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ef8199f_6610_44a1_b85c_fc7f2bea6294.slice/crio-conmon-40661640f2bd585544ed15b57dab77da0a8d6a773cf8897a5977a704b6656bb7.scope\": RecentStats: unable to find data in memory cache]" Feb 24 02:37:31.590201 master-0 kubenswrapper[31411]: I0224 02:37:31.590134 31411 generic.go:334] "Generic (PLEG): container finished" podID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerID="40661640f2bd585544ed15b57dab77da0a8d6a773cf8897a5977a704b6656bb7" exitCode=143 Feb 24 02:37:31.590201 master-0 kubenswrapper[31411]: I0224 02:37:31.590195 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-75c678c459-9mmbb" event={"ID":"9ef8199f-6610-44a1-b85c-fc7f2bea6294","Type":"ContainerDied","Data":"40661640f2bd585544ed15b57dab77da0a8d6a773cf8897a5977a704b6656bb7"} Feb 24 02:37:32.068412 master-0 kubenswrapper[31411]: I0224 02:37:32.068363 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-6ac23-volume-lvm-iscsi-0" Feb 24 02:37:32.159463 master-0 kubenswrapper[31411]: I0224 02:37:32.158380 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-6ac23-backup-0" Feb 24 02:37:32.507417 master-0 kubenswrapper[31411]: I0224 02:37:32.507366 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:32.591036 master-0 kubenswrapper[31411]: I0224 02:37:32.589022 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-8hw9n"] Feb 24 02:37:32.624642 master-0 kubenswrapper[31411]: I0224 02:37:32.624508 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-75c678c459-9mmbb" event={"ID":"9ef8199f-6610-44a1-b85c-fc7f2bea6294","Type":"ContainerDied","Data":"a688c60e9c9a887785f46b66598fa692f80f62afa09344e913a89c83d554cd86"} Feb 24 02:37:32.624642 master-0 kubenswrapper[31411]: I0224 02:37:32.624628 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-675fbd6d58-pdtfj"] Feb 24 02:37:32.624779 master-0 kubenswrapper[31411]: I0224 02:37:32.624657 31411 scope.go:117] "RemoveContainer" containerID="29d1fd3e2cc0d6b1ac29b953abb9753599726a368e8a64ba12da6ee6a3c363e4" Feb 24 02:37:32.624779 master-0 kubenswrapper[31411]: I0224 02:37:32.624705 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-75c678c459-9mmbb" Feb 24 02:37:32.670718 master-0 kubenswrapper[31411]: I0224 02:37:32.670655 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data\") pod \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " Feb 24 02:37:32.672917 master-0 kubenswrapper[31411]: I0224 02:37:32.670776 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9ef8199f-6610-44a1-b85c-fc7f2bea6294-etc-podinfo\") pod \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " Feb 24 02:37:32.672917 master-0 kubenswrapper[31411]: I0224 02:37:32.670885 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-combined-ca-bundle\") pod \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " Feb 24 02:37:32.672917 master-0 kubenswrapper[31411]: I0224 02:37:32.671065 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-scripts\") pod \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " Feb 24 02:37:32.672917 master-0 kubenswrapper[31411]: I0224 02:37:32.671135 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7m95\" (UniqueName: \"kubernetes.io/projected/9ef8199f-6610-44a1-b85c-fc7f2bea6294-kube-api-access-z7m95\") pod \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " Feb 24 02:37:32.672917 master-0 kubenswrapper[31411]: I0224 02:37:32.671187 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-merged\") pod \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " Feb 24 02:37:32.672917 master-0 kubenswrapper[31411]: I0224 02:37:32.671260 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-logs\") pod \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " Feb 24 02:37:32.672917 master-0 kubenswrapper[31411]: I0224 02:37:32.671346 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-custom\") pod \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\" (UID: \"9ef8199f-6610-44a1-b85c-fc7f2bea6294\") " Feb 24 02:37:32.674134 master-0 kubenswrapper[31411]: I0224 02:37:32.674075 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "9ef8199f-6610-44a1-b85c-fc7f2bea6294" (UID: "9ef8199f-6610-44a1-b85c-fc7f2bea6294"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:37:32.678059 master-0 kubenswrapper[31411]: I0224 02:37:32.676547 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9ef8199f-6610-44a1-b85c-fc7f2bea6294" (UID: "9ef8199f-6610-44a1-b85c-fc7f2bea6294"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:32.678059 master-0 kubenswrapper[31411]: I0224 02:37:32.676959 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-scripts" (OuterVolumeSpecName: "scripts") pod "9ef8199f-6610-44a1-b85c-fc7f2bea6294" (UID: "9ef8199f-6610-44a1-b85c-fc7f2bea6294"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:32.678059 master-0 kubenswrapper[31411]: I0224 02:37:32.677171 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-logs" (OuterVolumeSpecName: "logs") pod "9ef8199f-6610-44a1-b85c-fc7f2bea6294" (UID: "9ef8199f-6610-44a1-b85c-fc7f2bea6294"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:37:32.679004 master-0 kubenswrapper[31411]: I0224 02:37:32.678778 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9ef8199f-6610-44a1-b85c-fc7f2bea6294-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "9ef8199f-6610-44a1-b85c-fc7f2bea6294" (UID: "9ef8199f-6610-44a1-b85c-fc7f2bea6294"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 02:37:32.681789 master-0 kubenswrapper[31411]: I0224 02:37:32.681712 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef8199f-6610-44a1-b85c-fc7f2bea6294-kube-api-access-z7m95" (OuterVolumeSpecName: "kube-api-access-z7m95") pod "9ef8199f-6610-44a1-b85c-fc7f2bea6294" (UID: "9ef8199f-6610-44a1-b85c-fc7f2bea6294"). InnerVolumeSpecName "kube-api-access-z7m95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:32.690676 master-0 kubenswrapper[31411]: I0224 02:37:32.690632 31411 scope.go:117] "RemoveContainer" containerID="40661640f2bd585544ed15b57dab77da0a8d6a773cf8897a5977a704b6656bb7" Feb 24 02:37:32.709465 master-0 kubenswrapper[31411]: I0224 02:37:32.709408 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data" (OuterVolumeSpecName: "config-data") pod "9ef8199f-6610-44a1-b85c-fc7f2bea6294" (UID: "9ef8199f-6610-44a1-b85c-fc7f2bea6294"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:32.738711 master-0 kubenswrapper[31411]: I0224 02:37:32.738652 31411 scope.go:117] "RemoveContainer" containerID="8ddc546cc3f3991e336a928e41297a654e01303391232133cff6addd93a06bab" Feb 24 02:37:32.747875 master-0 kubenswrapper[31411]: I0224 02:37:32.747835 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ef8199f-6610-44a1-b85c-fc7f2bea6294" (UID: "9ef8199f-6610-44a1-b85c-fc7f2bea6294"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:32.775087 master-0 kubenswrapper[31411]: I0224 02:37:32.775032 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:32.775087 master-0 kubenswrapper[31411]: I0224 02:37:32.775080 31411 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9ef8199f-6610-44a1-b85c-fc7f2bea6294-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:32.775212 master-0 kubenswrapper[31411]: I0224 02:37:32.775093 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:32.775212 master-0 kubenswrapper[31411]: I0224 02:37:32.775109 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:32.775212 master-0 kubenswrapper[31411]: I0224 02:37:32.775127 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7m95\" (UniqueName: \"kubernetes.io/projected/9ef8199f-6610-44a1-b85c-fc7f2bea6294-kube-api-access-z7m95\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:32.775212 master-0 kubenswrapper[31411]: I0224 02:37:32.775138 31411 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:32.775212 master-0 kubenswrapper[31411]: I0224 02:37:32.775147 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9ef8199f-6610-44a1-b85c-fc7f2bea6294-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:32.775212 master-0 kubenswrapper[31411]: I0224 02:37:32.775157 31411 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ef8199f-6610-44a1-b85c-fc7f2bea6294-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:33.152454 master-0 kubenswrapper[31411]: I0224 02:37:33.152361 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-75c678c459-9mmbb"] Feb 24 02:37:33.201597 master-0 kubenswrapper[31411]: I0224 02:37:33.201436 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-75c678c459-9mmbb"] Feb 24 02:37:33.644705 master-0 kubenswrapper[31411]: I0224 02:37:33.644647 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-8hw9n" event={"ID":"3695612d-87a3-4401-9303-05ae933a9f78","Type":"ContainerStarted","Data":"0584c53169051c3b8f3c6fe97050a745191402c66fef44ca5b9f56f8edd8f23c"} Feb 24 02:37:33.649675 master-0 kubenswrapper[31411]: I0224 02:37:33.649648 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-675fbd6d58-pdtfj" event={"ID":"ea5f1aef-3469-4396-b111-a7fd2dc4f40c","Type":"ContainerStarted","Data":"5dd57b5f239221487b54b3aa0cfe643a385a91ebb1fe892b598214bf34a5ecab"} Feb 24 02:37:33.649763 master-0 kubenswrapper[31411]: I0224 02:37:33.649685 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:33.649763 master-0 kubenswrapper[31411]: I0224 02:37:33.649705 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-675fbd6d58-pdtfj" event={"ID":"ea5f1aef-3469-4396-b111-a7fd2dc4f40c","Type":"ContainerStarted","Data":"45dd88569cca83541dd0002edb4beb9ea68e63bd97af6340430e66d9c09dac9d"} Feb 24 02:37:33.649763 master-0 kubenswrapper[31411]: I0224 
02:37:33.649718 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-675fbd6d58-pdtfj" event={"ID":"ea5f1aef-3469-4396-b111-a7fd2dc4f40c","Type":"ContainerStarted","Data":"fa507c45f7866a9114fdf50b7c44ecc17378fc175879d9dc56dbd9fdf897f117"} Feb 24 02:37:33.649763 master-0 kubenswrapper[31411]: I0224 02:37:33.649750 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-675fbd6d58-pdtfj" Feb 24 02:37:33.655862 master-0 kubenswrapper[31411]: I0224 02:37:33.655798 31411 generic.go:334] "Generic (PLEG): container finished" podID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerID="ab53938e2cb3d0282666094cb678cbb91c93a2a0e167f53582f18f100dd71ac2" exitCode=0 Feb 24 02:37:33.655922 master-0 kubenswrapper[31411]: I0224 02:37:33.655881 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d7d5c157-238f-4062-8e57-dccee5fa4f9e","Type":"ContainerDied","Data":"ab53938e2cb3d0282666094cb678cbb91c93a2a0e167f53582f18f100dd71ac2"} Feb 24 02:37:33.689455 master-0 kubenswrapper[31411]: I0224 02:37:33.689353 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-675fbd6d58-pdtfj" podStartSLOduration=4.689322304 podStartE2EDuration="4.689322304s" podCreationTimestamp="2026-02-24 02:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:33.686912566 +0000 UTC m=+996.904110432" watchObservedRunningTime="2026-02-24 02:37:33.689322304 +0000 UTC m=+996.906520150" Feb 24 02:37:34.077738 master-0 kubenswrapper[31411]: I0224 02:37:34.077661 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pf58r"] Feb 24 02:37:34.078659 master-0 kubenswrapper[31411]: E0224 02:37:34.078506 31411 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api" Feb 24 02:37:34.078659 master-0 kubenswrapper[31411]: I0224 02:37:34.078528 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api" Feb 24 02:37:34.078659 master-0 kubenswrapper[31411]: E0224 02:37:34.078553 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="init" Feb 24 02:37:34.078659 master-0 kubenswrapper[31411]: I0224 02:37:34.078559 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="init" Feb 24 02:37:34.078659 master-0 kubenswrapper[31411]: E0224 02:37:34.078607 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api-log" Feb 24 02:37:34.078659 master-0 kubenswrapper[31411]: I0224 02:37:34.078618 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api-log" Feb 24 02:37:34.081804 master-0 kubenswrapper[31411]: I0224 02:37:34.078894 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api" Feb 24 02:37:34.081804 master-0 kubenswrapper[31411]: I0224 02:37:34.078907 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api-log" Feb 24 02:37:34.081804 master-0 kubenswrapper[31411]: I0224 02:37:34.079922 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pf58r"
Feb 24 02:37:34.124950 master-0 kubenswrapper[31411]: I0224 02:37:34.124888 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pf58r"]
Feb 24 02:37:34.241200 master-0 kubenswrapper[31411]: I0224 02:37:34.231987 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c676cf-386b-455c-b9f8-b88a3a34a136-operator-scripts\") pod \"nova-api-db-create-pf58r\" (UID: \"b8c676cf-386b-455c-b9f8-b88a3a34a136\") " pod="openstack/nova-api-db-create-pf58r"
Feb 24 02:37:34.241200 master-0 kubenswrapper[31411]: I0224 02:37:34.232276 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zcjv\" (UniqueName: \"kubernetes.io/projected/b8c676cf-386b-455c-b9f8-b88a3a34a136-kube-api-access-5zcjv\") pod \"nova-api-db-create-pf58r\" (UID: \"b8c676cf-386b-455c-b9f8-b88a3a34a136\") " pod="openstack/nova-api-db-create-pf58r"
Feb 24 02:37:34.301686 master-0 kubenswrapper[31411]: I0224 02:37:34.301616 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-d8zwm"]
Feb 24 02:37:34.323258 master-0 kubenswrapper[31411]: E0224 02:37:34.302850 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api"
Feb 24 02:37:34.323679 master-0 kubenswrapper[31411]: I0224 02:37:34.323638 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api"
Feb 24 02:37:34.324547 master-0 kubenswrapper[31411]: I0224 02:37:34.324531 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" containerName="ironic-api"
Feb 24 02:37:34.325619 master-0 kubenswrapper[31411]: I0224 02:37:34.325601 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d8zwm"
Feb 24 02:37:34.343598 master-0 kubenswrapper[31411]: I0224 02:37:34.335632 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zcjv\" (UniqueName: \"kubernetes.io/projected/b8c676cf-386b-455c-b9f8-b88a3a34a136-kube-api-access-5zcjv\") pod \"nova-api-db-create-pf58r\" (UID: \"b8c676cf-386b-455c-b9f8-b88a3a34a136\") " pod="openstack/nova-api-db-create-pf58r"
Feb 24 02:37:34.344088 master-0 kubenswrapper[31411]: I0224 02:37:34.344002 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c676cf-386b-455c-b9f8-b88a3a34a136-operator-scripts\") pod \"nova-api-db-create-pf58r\" (UID: \"b8c676cf-386b-455c-b9f8-b88a3a34a136\") " pod="openstack/nova-api-db-create-pf58r"
Feb 24 02:37:34.377691 master-0 kubenswrapper[31411]: I0224 02:37:34.377634 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c676cf-386b-455c-b9f8-b88a3a34a136-operator-scripts\") pod \"nova-api-db-create-pf58r\" (UID: \"b8c676cf-386b-455c-b9f8-b88a3a34a136\") " pod="openstack/nova-api-db-create-pf58r"
Feb 24 02:37:34.407094 master-0 kubenswrapper[31411]: I0224 02:37:34.407011 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d8zwm"]
Feb 24 02:37:34.447367 master-0 kubenswrapper[31411]: I0224 02:37:34.447282 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4defeb-f6b0-46db-9acc-6df2d2490988-operator-scripts\") pod \"nova-cell0-db-create-d8zwm\" (UID: \"0e4defeb-f6b0-46db-9acc-6df2d2490988\") " pod="openstack/nova-cell0-db-create-d8zwm"
Feb 24 02:37:34.447367 master-0 kubenswrapper[31411]: I0224 02:37:34.447355 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfckq\" (UniqueName: \"kubernetes.io/projected/0e4defeb-f6b0-46db-9acc-6df2d2490988-kube-api-access-rfckq\") pod \"nova-cell0-db-create-d8zwm\" (UID: \"0e4defeb-f6b0-46db-9acc-6df2d2490988\") " pod="openstack/nova-cell0-db-create-d8zwm"
Feb 24 02:37:34.550884 master-0 kubenswrapper[31411]: I0224 02:37:34.549902 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4defeb-f6b0-46db-9acc-6df2d2490988-operator-scripts\") pod \"nova-cell0-db-create-d8zwm\" (UID: \"0e4defeb-f6b0-46db-9acc-6df2d2490988\") " pod="openstack/nova-cell0-db-create-d8zwm"
Feb 24 02:37:34.550884 master-0 kubenswrapper[31411]: I0224 02:37:34.549970 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfckq\" (UniqueName: \"kubernetes.io/projected/0e4defeb-f6b0-46db-9acc-6df2d2490988-kube-api-access-rfckq\") pod \"nova-cell0-db-create-d8zwm\" (UID: \"0e4defeb-f6b0-46db-9acc-6df2d2490988\") " pod="openstack/nova-cell0-db-create-d8zwm"
Feb 24 02:37:34.551312 master-0 kubenswrapper[31411]: I0224 02:37:34.551256 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4defeb-f6b0-46db-9acc-6df2d2490988-operator-scripts\") pod \"nova-cell0-db-create-d8zwm\" (UID: \"0e4defeb-f6b0-46db-9acc-6df2d2490988\") " pod="openstack/nova-cell0-db-create-d8zwm"
Feb 24 02:37:35.044595 master-0 kubenswrapper[31411]: I0224 02:37:35.043729 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zcjv\" (UniqueName: \"kubernetes.io/projected/b8c676cf-386b-455c-b9f8-b88a3a34a136-kube-api-access-5zcjv\") pod \"nova-api-db-create-pf58r\" (UID: \"b8c676cf-386b-455c-b9f8-b88a3a34a136\") " pod="openstack/nova-api-db-create-pf58r"
Feb 24 02:37:35.117642 master-0 kubenswrapper[31411]: I0224 02:37:35.117082 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfckq\" (UniqueName: \"kubernetes.io/projected/0e4defeb-f6b0-46db-9acc-6df2d2490988-kube-api-access-rfckq\") pod \"nova-cell0-db-create-d8zwm\" (UID: \"0e4defeb-f6b0-46db-9acc-6df2d2490988\") " pod="openstack/nova-cell0-db-create-d8zwm"
Feb 24 02:37:35.119842 master-0 kubenswrapper[31411]: I0224 02:37:35.118167 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef8199f-6610-44a1-b85c-fc7f2bea6294" path="/var/lib/kubelet/pods/9ef8199f-6610-44a1-b85c-fc7f2bea6294/volumes"
Feb 24 02:37:35.133078 master-0 kubenswrapper[31411]: I0224 02:37:35.132333 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xrhk2"]
Feb 24 02:37:35.136750 master-0 kubenswrapper[31411]: I0224 02:37:35.135148 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xrhk2"
Feb 24 02:37:35.173136 master-0 kubenswrapper[31411]: I0224 02:37:35.173022 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdr9m\" (UniqueName: \"kubernetes.io/projected/80c28d7c-77ba-477e-9b90-8432c2c7b48f-kube-api-access-hdr9m\") pod \"nova-cell1-db-create-xrhk2\" (UID: \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\") " pod="openstack/nova-cell1-db-create-xrhk2"
Feb 24 02:37:35.173445 master-0 kubenswrapper[31411]: I0224 02:37:35.173149 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c28d7c-77ba-477e-9b90-8432c2c7b48f-operator-scripts\") pod \"nova-cell1-db-create-xrhk2\" (UID: \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\") " pod="openstack/nova-cell1-db-create-xrhk2"
Feb 24 02:37:35.178801 master-0 kubenswrapper[31411]: I0224 02:37:35.178755 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6b46dbc6bf-ngrn9"
Feb 24 02:37:35.282598 master-0 kubenswrapper[31411]: I0224 02:37:35.282472 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdr9m\" (UniqueName: \"kubernetes.io/projected/80c28d7c-77ba-477e-9b90-8432c2c7b48f-kube-api-access-hdr9m\") pod \"nova-cell1-db-create-xrhk2\" (UID: \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\") " pod="openstack/nova-cell1-db-create-xrhk2"
Feb 24 02:37:35.283082 master-0 kubenswrapper[31411]: I0224 02:37:35.283035 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c28d7c-77ba-477e-9b90-8432c2c7b48f-operator-scripts\") pod \"nova-cell1-db-create-xrhk2\" (UID: \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\") " pod="openstack/nova-cell1-db-create-xrhk2"
Feb 24 02:37:35.284298 master-0 kubenswrapper[31411]: I0224 02:37:35.284249 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c28d7c-77ba-477e-9b90-8432c2c7b48f-operator-scripts\") pod \"nova-cell1-db-create-xrhk2\" (UID: \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\") " pod="openstack/nova-cell1-db-create-xrhk2"
Feb 24 02:37:35.326668 master-0 kubenswrapper[31411]: I0224 02:37:35.325173 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d8zwm"
Feb 24 02:37:35.336612 master-0 kubenswrapper[31411]: I0224 02:37:35.334069 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pf58r"
Feb 24 02:37:35.344009 master-0 kubenswrapper[31411]: I0224 02:37:35.339118 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-05a5-account-create-update-bt8vb"]
Feb 24 02:37:35.344009 master-0 kubenswrapper[31411]: I0224 02:37:35.340841 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-05a5-account-create-update-bt8vb"
Feb 24 02:37:35.347813 master-0 kubenswrapper[31411]: I0224 02:37:35.344278 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 24 02:37:35.354774 master-0 kubenswrapper[31411]: I0224 02:37:35.354558 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xrhk2"]
Feb 24 02:37:35.388731 master-0 kubenswrapper[31411]: I0224 02:37:35.388587 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1ff294-924a-46b5-9107-84238d30135f-operator-scripts\") pod \"nova-api-05a5-account-create-update-bt8vb\" (UID: \"8c1ff294-924a-46b5-9107-84238d30135f\") " pod="openstack/nova-api-05a5-account-create-update-bt8vb"
Feb 24 02:37:35.389008 master-0 kubenswrapper[31411]: I0224 02:37:35.388951 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsktd\" (UniqueName: \"kubernetes.io/projected/8c1ff294-924a-46b5-9107-84238d30135f-kube-api-access-zsktd\") pod \"nova-api-05a5-account-create-update-bt8vb\" (UID: \"8c1ff294-924a-46b5-9107-84238d30135f\") " pod="openstack/nova-api-05a5-account-create-update-bt8vb"
Feb 24 02:37:35.493004 master-0 kubenswrapper[31411]: I0224 02:37:35.492827 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1ff294-924a-46b5-9107-84238d30135f-operator-scripts\") pod \"nova-api-05a5-account-create-update-bt8vb\" (UID: \"8c1ff294-924a-46b5-9107-84238d30135f\") " pod="openstack/nova-api-05a5-account-create-update-bt8vb"
Feb 24 02:37:35.493279 master-0 kubenswrapper[31411]: I0224 02:37:35.493013 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsktd\" (UniqueName: \"kubernetes.io/projected/8c1ff294-924a-46b5-9107-84238d30135f-kube-api-access-zsktd\") pod \"nova-api-05a5-account-create-update-bt8vb\" (UID: \"8c1ff294-924a-46b5-9107-84238d30135f\") " pod="openstack/nova-api-05a5-account-create-update-bt8vb"
Feb 24 02:37:35.493931 master-0 kubenswrapper[31411]: I0224 02:37:35.493875 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1ff294-924a-46b5-9107-84238d30135f-operator-scripts\") pod \"nova-api-05a5-account-create-update-bt8vb\" (UID: \"8c1ff294-924a-46b5-9107-84238d30135f\") " pod="openstack/nova-api-05a5-account-create-update-bt8vb"
Feb 24 02:37:35.494372 master-0 kubenswrapper[31411]: I0224 02:37:35.494328 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-05a5-account-create-update-bt8vb"]
Feb 24 02:37:35.561395 master-0 kubenswrapper[31411]: I0224 02:37:35.561319 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdr9m\" (UniqueName: \"kubernetes.io/projected/80c28d7c-77ba-477e-9b90-8432c2c7b48f-kube-api-access-hdr9m\") pod \"nova-cell1-db-create-xrhk2\" (UID: \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\") " pod="openstack/nova-cell1-db-create-xrhk2"
Feb 24 02:37:35.570531 master-0 kubenswrapper[31411]: I0224 02:37:35.570473 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsktd\" (UniqueName: \"kubernetes.io/projected/8c1ff294-924a-46b5-9107-84238d30135f-kube-api-access-zsktd\") pod \"nova-api-05a5-account-create-update-bt8vb\" (UID: \"8c1ff294-924a-46b5-9107-84238d30135f\") " pod="openstack/nova-api-05a5-account-create-update-bt8vb"
Feb 24 02:37:35.673524 master-0 kubenswrapper[31411]: I0224 02:37:35.673418 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-05a5-account-create-update-bt8vb"
Feb 24 02:37:35.789062 master-0 kubenswrapper[31411]: I0224 02:37:35.789026 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xrhk2"
Feb 24 02:37:35.789993 master-0 kubenswrapper[31411]: I0224 02:37:35.789761 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55455d5d8d-zzwzz"]
Feb 24 02:37:35.790131 master-0 kubenswrapper[31411]: I0224 02:37:35.790094 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55455d5d8d-zzwzz" podUID="65df10fc-36c4-4eab-aaf3-962a5294face" containerName="neutron-api" containerID="cri-o://1e06af42a1cc4bcf137d694554941c425d9cdbfbc5a16e50b74086ff3b340afc" gracePeriod=30
Feb 24 02:37:35.791058 master-0 kubenswrapper[31411]: I0224 02:37:35.790996 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55455d5d8d-zzwzz" podUID="65df10fc-36c4-4eab-aaf3-962a5294face" containerName="neutron-httpd" containerID="cri-o://ad67ba0dbffb3aee2d1f766a706493369bcb70ce69df62dd27486df44d71d40b" gracePeriod=30
Feb 24 02:37:35.892257 master-0 kubenswrapper[31411]: I0224 02:37:35.892184 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7331-account-create-update-4cdxr"]
Feb 24 02:37:35.894277 master-0 kubenswrapper[31411]: I0224 02:37:35.894237 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7331-account-create-update-4cdxr"
Feb 24 02:37:35.901815 master-0 kubenswrapper[31411]: I0224 02:37:35.901767 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 24 02:37:35.915606 master-0 kubenswrapper[31411]: I0224 02:37:35.913646 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7331-account-create-update-4cdxr"]
Feb 24 02:37:35.925676 master-0 kubenswrapper[31411]: I0224 02:37:35.925606 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tvw5\" (UniqueName: \"kubernetes.io/projected/264be425-2328-436a-9a2d-0215e640276c-kube-api-access-8tvw5\") pod \"nova-cell0-7331-account-create-update-4cdxr\" (UID: \"264be425-2328-436a-9a2d-0215e640276c\") " pod="openstack/nova-cell0-7331-account-create-update-4cdxr"
Feb 24 02:37:35.926866 master-0 kubenswrapper[31411]: I0224 02:37:35.926799 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/264be425-2328-436a-9a2d-0215e640276c-operator-scripts\") pod \"nova-cell0-7331-account-create-update-4cdxr\" (UID: \"264be425-2328-436a-9a2d-0215e640276c\") " pod="openstack/nova-cell0-7331-account-create-update-4cdxr"
Feb 24 02:37:35.927090 master-0 kubenswrapper[31411]: I0224 02:37:35.925286 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d8b9-account-create-update-kq9f4"]
Feb 24 02:37:35.932215 master-0 kubenswrapper[31411]: I0224 02:37:35.932119 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4"
Feb 24 02:37:35.939979 master-0 kubenswrapper[31411]: I0224 02:37:35.939929 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 24 02:37:35.941083 master-0 kubenswrapper[31411]: I0224 02:37:35.941044 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d8b9-account-create-update-kq9f4"]
Feb 24 02:37:36.031077 master-0 kubenswrapper[31411]: I0224 02:37:36.031003 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/264be425-2328-436a-9a2d-0215e640276c-operator-scripts\") pod \"nova-cell0-7331-account-create-update-4cdxr\" (UID: \"264be425-2328-436a-9a2d-0215e640276c\") " pod="openstack/nova-cell0-7331-account-create-update-4cdxr"
Feb 24 02:37:36.031077 master-0 kubenswrapper[31411]: I0224 02:37:36.031078 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b32976-b3fc-498f-b15b-690cee8bfc95-operator-scripts\") pod \"nova-cell1-d8b9-account-create-update-kq9f4\" (UID: \"85b32976-b3fc-498f-b15b-690cee8bfc95\") " pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4"
Feb 24 02:37:36.031382 master-0 kubenswrapper[31411]: I0224 02:37:36.031226 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6j65\" (UniqueName: \"kubernetes.io/projected/85b32976-b3fc-498f-b15b-690cee8bfc95-kube-api-access-c6j65\") pod \"nova-cell1-d8b9-account-create-update-kq9f4\" (UID: \"85b32976-b3fc-498f-b15b-690cee8bfc95\") " pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4"
Feb 24 02:37:36.032214 master-0 kubenswrapper[31411]: I0224 02:37:36.032181 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tvw5\" (UniqueName: \"kubernetes.io/projected/264be425-2328-436a-9a2d-0215e640276c-kube-api-access-8tvw5\") pod \"nova-cell0-7331-account-create-update-4cdxr\" (UID: \"264be425-2328-436a-9a2d-0215e640276c\") " pod="openstack/nova-cell0-7331-account-create-update-4cdxr"
Feb 24 02:37:36.035512 master-0 kubenswrapper[31411]: I0224 02:37:36.035471 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/264be425-2328-436a-9a2d-0215e640276c-operator-scripts\") pod \"nova-cell0-7331-account-create-update-4cdxr\" (UID: \"264be425-2328-436a-9a2d-0215e640276c\") " pod="openstack/nova-cell0-7331-account-create-update-4cdxr"
Feb 24 02:37:36.041919 master-0 kubenswrapper[31411]: I0224 02:37:36.041867 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-6ac23-scheduler-0"
Feb 24 02:37:36.051135 master-0 kubenswrapper[31411]: I0224 02:37:36.050413 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tvw5\" (UniqueName: \"kubernetes.io/projected/264be425-2328-436a-9a2d-0215e640276c-kube-api-access-8tvw5\") pod \"nova-cell0-7331-account-create-update-4cdxr\" (UID: \"264be425-2328-436a-9a2d-0215e640276c\") " pod="openstack/nova-cell0-7331-account-create-update-4cdxr"
Feb 24 02:37:36.135404 master-0 kubenswrapper[31411]: I0224 02:37:36.135335 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b32976-b3fc-498f-b15b-690cee8bfc95-operator-scripts\") pod \"nova-cell1-d8b9-account-create-update-kq9f4\" (UID: \"85b32976-b3fc-498f-b15b-690cee8bfc95\") " pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4"
Feb 24 02:37:36.135697 master-0 kubenswrapper[31411]: I0224 02:37:36.135459 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6j65\" (UniqueName: \"kubernetes.io/projected/85b32976-b3fc-498f-b15b-690cee8bfc95-kube-api-access-c6j65\") pod \"nova-cell1-d8b9-account-create-update-kq9f4\" (UID: \"85b32976-b3fc-498f-b15b-690cee8bfc95\") " pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4"
Feb 24 02:37:36.136241 master-0 kubenswrapper[31411]: I0224 02:37:36.136192 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b32976-b3fc-498f-b15b-690cee8bfc95-operator-scripts\") pod \"nova-cell1-d8b9-account-create-update-kq9f4\" (UID: \"85b32976-b3fc-498f-b15b-690cee8bfc95\") " pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4"
Feb 24 02:37:36.153403 master-0 kubenswrapper[31411]: I0224 02:37:36.153287 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6j65\" (UniqueName: \"kubernetes.io/projected/85b32976-b3fc-498f-b15b-690cee8bfc95-kube-api-access-c6j65\") pod \"nova-cell1-d8b9-account-create-update-kq9f4\" (UID: \"85b32976-b3fc-498f-b15b-690cee8bfc95\") " pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4"
Feb 24 02:37:36.254952 master-0 kubenswrapper[31411]: I0224 02:37:36.253052 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7331-account-create-update-4cdxr"
Feb 24 02:37:36.263631 master-0 kubenswrapper[31411]: I0224 02:37:36.263567 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4"
Feb 24 02:37:36.268057 master-0 kubenswrapper[31411]: I0224 02:37:36.267427 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8705a-default-external-api-0"]
Feb 24 02:37:36.268057 master-0 kubenswrapper[31411]: I0224 02:37:36.267777 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-8705a-default-external-api-0" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-log" containerID="cri-o://4e2c4e26527f21cb6fd2affdd10a05d6c19198a22fef1d0abeba78f369c2da3a" gracePeriod=30
Feb 24 02:37:36.268508 master-0 kubenswrapper[31411]: I0224 02:37:36.268461 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-8705a-default-external-api-0" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-httpd" containerID="cri-o://e7ed8eeb5bf5c7ee8f1fab4c4a4bb1d397726e2e8d6ff8b11297690c72eea8cf" gracePeriod=30
Feb 24 02:37:36.767724 master-0 kubenswrapper[31411]: I0224 02:37:36.767633 31411 generic.go:334] "Generic (PLEG): container finished" podID="65df10fc-36c4-4eab-aaf3-962a5294face" containerID="ad67ba0dbffb3aee2d1f766a706493369bcb70ce69df62dd27486df44d71d40b" exitCode=0
Feb 24 02:37:36.768002 master-0 kubenswrapper[31411]: I0224 02:37:36.767701 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55455d5d8d-zzwzz" event={"ID":"65df10fc-36c4-4eab-aaf3-962a5294face","Type":"ContainerDied","Data":"ad67ba0dbffb3aee2d1f766a706493369bcb70ce69df62dd27486df44d71d40b"}
Feb 24 02:37:36.771198 master-0 kubenswrapper[31411]: I0224 02:37:36.771163 31411 generic.go:334] "Generic (PLEG): container finished" podID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerID="4e2c4e26527f21cb6fd2affdd10a05d6c19198a22fef1d0abeba78f369c2da3a" exitCode=143
Feb 24 02:37:36.771263 master-0 kubenswrapper[31411]: I0224 02:37:36.771196 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"3e330fef-38b6-4b5d-b001-886ecfdd4028","Type":"ContainerDied","Data":"4e2c4e26527f21cb6fd2affdd10a05d6c19198a22fef1d0abeba78f369c2da3a"}
Feb 24 02:37:38.809086 master-0 kubenswrapper[31411]: I0224 02:37:38.804705 31411 generic.go:334] "Generic (PLEG): container finished" podID="65df10fc-36c4-4eab-aaf3-962a5294face" containerID="1e06af42a1cc4bcf137d694554941c425d9cdbfbc5a16e50b74086ff3b340afc" exitCode=0
Feb 24 02:37:38.809086 master-0 kubenswrapper[31411]: I0224 02:37:38.804783 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55455d5d8d-zzwzz" event={"ID":"65df10fc-36c4-4eab-aaf3-962a5294face","Type":"ContainerDied","Data":"1e06af42a1cc4bcf137d694554941c425d9cdbfbc5a16e50b74086ff3b340afc"}
Feb 24 02:37:39.821950 master-0 kubenswrapper[31411]: I0224 02:37:39.821767 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-675fbd6d58-pdtfj"
Feb 24 02:37:39.825750 master-0 kubenswrapper[31411]: I0224 02:37:39.823799 31411 generic.go:334] "Generic (PLEG): container finished" podID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerID="e7ed8eeb5bf5c7ee8f1fab4c4a4bb1d397726e2e8d6ff8b11297690c72eea8cf" exitCode=0
Feb 24 02:37:39.825750 master-0 kubenswrapper[31411]: I0224 02:37:39.823835 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"3e330fef-38b6-4b5d-b001-886ecfdd4028","Type":"ContainerDied","Data":"e7ed8eeb5bf5c7ee8f1fab4c4a4bb1d397726e2e8d6ff8b11297690c72eea8cf"}
Feb 24 02:37:39.825750 master-0 kubenswrapper[31411]: I0224 02:37:39.824566 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-675fbd6d58-pdtfj"
Feb 24 02:37:40.392556 master-0 kubenswrapper[31411]: I0224 02:37:40.392147 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-8705a-default-external-api-0" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.215:9292/healthcheck\": dial tcp 10.128.0.215:9292: connect: connection refused"
Feb 24 02:37:40.392556 master-0 kubenswrapper[31411]: I0224 02:37:40.392262 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-8705a-default-external-api-0" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.215:9292/healthcheck\": dial tcp 10.128.0.215:9292: connect: connection refused"
Feb 24 02:37:41.155496 master-0 kubenswrapper[31411]: I0224 02:37:41.142340 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:41.178944 master-0 kubenswrapper[31411]: I0224 02:37:41.178462 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") "
Feb 24 02:37:41.178944 master-0 kubenswrapper[31411]: I0224 02:37:41.178657 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz5wk\" (UniqueName: \"kubernetes.io/projected/d7d5c157-238f-4062-8e57-dccee5fa4f9e-kube-api-access-tz5wk\") pod \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") "
Feb 24 02:37:41.178944 master-0 kubenswrapper[31411]: I0224 02:37:41.178788 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-httpd-run\") pod \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") "
Feb 24 02:37:41.178944 master-0 kubenswrapper[31411]: I0224 02:37:41.178822 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-scripts\") pod \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") "
Feb 24 02:37:41.178944 master-0 kubenswrapper[31411]: I0224 02:37:41.178842 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-logs\") pod \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") "
Feb 24 02:37:41.179290 master-0 kubenswrapper[31411]: I0224 02:37:41.178983 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-internal-tls-certs\") pod \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") "
Feb 24 02:37:41.179290 master-0 kubenswrapper[31411]: I0224 02:37:41.179054 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-config-data\") pod \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") "
Feb 24 02:37:41.179290 master-0 kubenswrapper[31411]: I0224 02:37:41.179107 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-combined-ca-bundle\") pod \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\" (UID: \"d7d5c157-238f-4062-8e57-dccee5fa4f9e\") "
Feb 24 02:37:41.190376 master-0 kubenswrapper[31411]: I0224 02:37:41.190318 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d7d5c157-238f-4062-8e57-dccee5fa4f9e" (UID: "d7d5c157-238f-4062-8e57-dccee5fa4f9e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:37:41.196463 master-0 kubenswrapper[31411]: I0224 02:37:41.196397 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-logs" (OuterVolumeSpecName: "logs") pod "d7d5c157-238f-4062-8e57-dccee5fa4f9e" (UID: "d7d5c157-238f-4062-8e57-dccee5fa4f9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:37:41.210849 master-0 kubenswrapper[31411]: I0224 02:37:41.210797 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d5c157-238f-4062-8e57-dccee5fa4f9e-kube-api-access-tz5wk" (OuterVolumeSpecName: "kube-api-access-tz5wk") pod "d7d5c157-238f-4062-8e57-dccee5fa4f9e" (UID: "d7d5c157-238f-4062-8e57-dccee5fa4f9e"). InnerVolumeSpecName "kube-api-access-tz5wk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:37:41.227377 master-0 kubenswrapper[31411]: I0224 02:37:41.225252 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-scripts" (OuterVolumeSpecName: "scripts") pod "d7d5c157-238f-4062-8e57-dccee5fa4f9e" (UID: "d7d5c157-238f-4062-8e57-dccee5fa4f9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:41.231099 master-0 kubenswrapper[31411]: I0224 02:37:41.231036 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7d5c157-238f-4062-8e57-dccee5fa4f9e" (UID: "d7d5c157-238f-4062-8e57-dccee5fa4f9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:41.247622 master-0 kubenswrapper[31411]: I0224 02:37:41.247102 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0" (OuterVolumeSpecName: "glance") pod "d7d5c157-238f-4062-8e57-dccee5fa4f9e" (UID: "d7d5c157-238f-4062-8e57-dccee5fa4f9e"). InnerVolumeSpecName "pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 24 02:37:41.270082 master-0 kubenswrapper[31411]: I0224 02:37:41.266994 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-config-data" (OuterVolumeSpecName: "config-data") pod "d7d5c157-238f-4062-8e57-dccee5fa4f9e" (UID: "d7d5c157-238f-4062-8e57-dccee5fa4f9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:41.314898 master-0 kubenswrapper[31411]: I0224 02:37:41.313840 31411 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") on node \"master-0\" "
Feb 24 02:37:41.314898 master-0 kubenswrapper[31411]: I0224 02:37:41.313893 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz5wk\" (UniqueName: \"kubernetes.io/projected/d7d5c157-238f-4062-8e57-dccee5fa4f9e-kube-api-access-tz5wk\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:41.314898 master-0 kubenswrapper[31411]: I0224 02:37:41.313908 31411 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:41.314898 master-0 kubenswrapper[31411]: I0224 02:37:41.313920 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:41.314898 master-0 kubenswrapper[31411]: I0224 02:37:41.313930 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d5c157-238f-4062-8e57-dccee5fa4f9e-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:41.314898 master-0 kubenswrapper[31411]: I0224 02:37:41.313940 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:41.314898 master-0 kubenswrapper[31411]: I0224 02:37:41.313949 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:41.324263 master-0 kubenswrapper[31411]: I0224 02:37:41.322778 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d7d5c157-238f-4062-8e57-dccee5fa4f9e" (UID: "d7d5c157-238f-4062-8e57-dccee5fa4f9e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:37:41.359118 master-0 kubenswrapper[31411]: I0224 02:37:41.359070 31411 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 24 02:37:41.359355 master-0 kubenswrapper[31411]: I0224 02:37:41.359237 31411 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8" (UniqueName: "kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0") on node "master-0"
Feb 24 02:37:41.418766 master-0 kubenswrapper[31411]: I0224 02:37:41.418704 31411 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d5c157-238f-4062-8e57-dccee5fa4f9e-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:41.418766 master-0 kubenswrapper[31411]: I0224 02:37:41.418775 31411 reconciler_common.go:293] "Volume detached for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:41.856952 master-0 kubenswrapper[31411]: I0224 02:37:41.856396 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"d7d5c157-238f-4062-8e57-dccee5fa4f9e","Type":"ContainerDied","Data":"80ca5bbe767d764d34da8388315905b2af3069e5506597963e2473e36dcb7731"}
Feb 24 02:37:41.856952 master-0 kubenswrapper[31411]: I0224 02:37:41.856458 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:41.856952 master-0 kubenswrapper[31411]: I0224 02:37:41.856473 31411 scope.go:117] "RemoveContainer" containerID="ab53938e2cb3d0282666094cb678cbb91c93a2a0e167f53582f18f100dd71ac2"
Feb 24 02:37:41.947180 master-0 kubenswrapper[31411]: I0224 02:37:41.944383 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"]
Feb 24 02:37:42.026833 master-0 kubenswrapper[31411]: I0224 02:37:42.023614 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"]
Feb 24 02:37:42.039626 master-0 kubenswrapper[31411]: I0224 02:37:42.039550 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8705a-default-internal-api-0"]
Feb 24 02:37:42.040660 master-0 kubenswrapper[31411]: E0224 02:37:42.040643 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerName="glance-log"
Feb 24 02:37:42.040750 master-0 kubenswrapper[31411]: I0224 02:37:42.040740 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerName="glance-log"
Feb 24 02:37:42.040876 master-0 kubenswrapper[31411]: E0224 02:37:42.040865 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerName="glance-httpd"
Feb 24 02:37:42.040939 master-0 kubenswrapper[31411]: I0224 02:37:42.040930 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerName="glance-httpd"
Feb 24 02:37:42.041376 master-0 kubenswrapper[31411]: I0224 02:37:42.041360 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerName="glance-httpd"
Feb 24 02:37:42.042083 master-0 kubenswrapper[31411]: I0224 02:37:42.041452 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" containerName="glance-log"
Feb 24 02:37:42.051732 master-0 kubenswrapper[31411]: I0224 02:37:42.051699 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"]
Feb 24 02:37:42.053742 master-0 kubenswrapper[31411]: I0224 02:37:42.051901 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:42.056953 master-0 kubenswrapper[31411]: I0224 02:37:42.056931 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-8705a-default-internal-config-data"
Feb 24 02:37:42.057358 master-0 kubenswrapper[31411]: I0224 02:37:42.057333 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 24 02:37:42.196832 master-0 kubenswrapper[31411]: I0224 02:37:42.196664 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-combined-ca-bundle\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:42.197547 master-0 kubenswrapper[31411]: I0224 02:37:42.196934 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a09f571-15e6-485f-af69-147f5e94181f-httpd-run\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:42.197547 master-0 kubenswrapper[31411]: I0224 02:37:42.197192 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName:
\"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.197547 master-0 kubenswrapper[31411]: I0224 02:37:42.197288 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshjd\" (UniqueName: \"kubernetes.io/projected/4a09f571-15e6-485f-af69-147f5e94181f-kube-api-access-qshjd\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.197547 master-0 kubenswrapper[31411]: I0224 02:37:42.197490 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-config-data\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.197972 master-0 kubenswrapper[31411]: I0224 02:37:42.197922 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-internal-tls-certs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.198337 master-0 kubenswrapper[31411]: I0224 02:37:42.198317 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a09f571-15e6-485f-af69-147f5e94181f-logs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.198428 master-0 
kubenswrapper[31411]: I0224 02:37:42.198414 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-scripts\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.302392 master-0 kubenswrapper[31411]: I0224 02:37:42.301771 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a09f571-15e6-485f-af69-147f5e94181f-logs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.302392 master-0 kubenswrapper[31411]: I0224 02:37:42.302280 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-scripts\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.303485 master-0 kubenswrapper[31411]: I0224 02:37:42.302310 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a09f571-15e6-485f-af69-147f5e94181f-logs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.303485 master-0 kubenswrapper[31411]: I0224 02:37:42.302873 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-combined-ca-bundle\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 
24 02:37:42.303485 master-0 kubenswrapper[31411]: I0224 02:37:42.303013 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a09f571-15e6-485f-af69-147f5e94181f-httpd-run\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.303485 master-0 kubenswrapper[31411]: I0224 02:37:42.303162 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.303485 master-0 kubenswrapper[31411]: I0224 02:37:42.303228 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshjd\" (UniqueName: \"kubernetes.io/projected/4a09f571-15e6-485f-af69-147f5e94181f-kube-api-access-qshjd\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.303799 master-0 kubenswrapper[31411]: I0224 02:37:42.303758 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a09f571-15e6-485f-af69-147f5e94181f-httpd-run\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.303860 master-0 kubenswrapper[31411]: I0224 02:37:42.303825 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-config-data\") pod \"glance-8705a-default-internal-api-0\" (UID: 
\"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.304278 master-0 kubenswrapper[31411]: I0224 02:37:42.303901 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-internal-tls-certs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.307691 master-0 kubenswrapper[31411]: I0224 02:37:42.307667 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 02:37:42.308476 master-0 kubenswrapper[31411]: I0224 02:37:42.307705 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/568a2d9cea509129bb10d2a8daaed82c1a66b4ccaea30faa55e2d5b91b5cf92d/globalmount\"" pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.309520 master-0 kubenswrapper[31411]: I0224 02:37:42.309239 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-internal-tls-certs\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.309520 master-0 kubenswrapper[31411]: I0224 02:37:42.309457 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-combined-ca-bundle\") pod 
\"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.310327 master-0 kubenswrapper[31411]: I0224 02:37:42.310284 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-scripts\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.315903 master-0 kubenswrapper[31411]: I0224 02:37:42.315839 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a09f571-15e6-485f-af69-147f5e94181f-config-data\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.320767 master-0 kubenswrapper[31411]: I0224 02:37:42.320714 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshjd\" (UniqueName: \"kubernetes.io/projected/4a09f571-15e6-485f-af69-147f5e94181f-kube-api-access-qshjd\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:42.397213 master-0 kubenswrapper[31411]: I0224 02:37:42.387086 31411 scope.go:117] "RemoveContainer" containerID="d22d5d85603b1447841eb0a9617422fe92c19152b0c341477cd29d81d8ba7274" Feb 24 02:37:42.470547 master-0 kubenswrapper[31411]: I0224 02:37:42.469503 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:42.629200 master-0 kubenswrapper[31411]: I0224 02:37:42.623597 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcz2q\" (UniqueName: \"kubernetes.io/projected/65df10fc-36c4-4eab-aaf3-962a5294face-kube-api-access-mcz2q\") pod \"65df10fc-36c4-4eab-aaf3-962a5294face\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " Feb 24 02:37:42.629200 master-0 kubenswrapper[31411]: I0224 02:37:42.626393 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-ovndb-tls-certs\") pod \"65df10fc-36c4-4eab-aaf3-962a5294face\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " Feb 24 02:37:42.629200 master-0 kubenswrapper[31411]: I0224 02:37:42.626474 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-config\") pod \"65df10fc-36c4-4eab-aaf3-962a5294face\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " Feb 24 02:37:42.629200 master-0 kubenswrapper[31411]: I0224 02:37:42.626503 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-httpd-config\") pod \"65df10fc-36c4-4eab-aaf3-962a5294face\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " Feb 24 02:37:42.629200 master-0 kubenswrapper[31411]: I0224 02:37:42.626653 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-combined-ca-bundle\") pod \"65df10fc-36c4-4eab-aaf3-962a5294face\" (UID: \"65df10fc-36c4-4eab-aaf3-962a5294face\") " Feb 24 02:37:42.645499 master-0 kubenswrapper[31411]: I0224 02:37:42.645417 31411 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65df10fc-36c4-4eab-aaf3-962a5294face-kube-api-access-mcz2q" (OuterVolumeSpecName: "kube-api-access-mcz2q") pod "65df10fc-36c4-4eab-aaf3-962a5294face" (UID: "65df10fc-36c4-4eab-aaf3-962a5294face"). InnerVolumeSpecName "kube-api-access-mcz2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:42.657954 master-0 kubenswrapper[31411]: I0224 02:37:42.656815 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "65df10fc-36c4-4eab-aaf3-962a5294face" (UID: "65df10fc-36c4-4eab-aaf3-962a5294face"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:42.755636 master-0 kubenswrapper[31411]: I0224 02:37:42.751912 31411 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-httpd-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:42.755636 master-0 kubenswrapper[31411]: I0224 02:37:42.751978 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcz2q\" (UniqueName: \"kubernetes.io/projected/65df10fc-36c4-4eab-aaf3-962a5294face-kube-api-access-mcz2q\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:42.784905 master-0 kubenswrapper[31411]: I0224 02:37:42.784822 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65df10fc-36c4-4eab-aaf3-962a5294face" (UID: "65df10fc-36c4-4eab-aaf3-962a5294face"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:42.871466 master-0 kubenswrapper[31411]: I0224 02:37:42.871414 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:42.898831 master-0 kubenswrapper[31411]: I0224 02:37:42.898776 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-config" (OuterVolumeSpecName: "config") pod "65df10fc-36c4-4eab-aaf3-962a5294face" (UID: "65df10fc-36c4-4eab-aaf3-962a5294face"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:42.928836 master-0 kubenswrapper[31411]: I0224 02:37:42.908529 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55455d5d8d-zzwzz" event={"ID":"65df10fc-36c4-4eab-aaf3-962a5294face","Type":"ContainerDied","Data":"a97d3681a366aef008dd8b4e672ebfda294bf7bedf2b08bec5b44d62e29da8e7"} Feb 24 02:37:42.928836 master-0 kubenswrapper[31411]: I0224 02:37:42.908740 31411 scope.go:117] "RemoveContainer" containerID="ad67ba0dbffb3aee2d1f766a706493369bcb70ce69df62dd27486df44d71d40b" Feb 24 02:37:42.928836 master-0 kubenswrapper[31411]: I0224 02:37:42.909022 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55455d5d8d-zzwzz" Feb 24 02:37:42.928836 master-0 kubenswrapper[31411]: I0224 02:37:42.914202 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "65df10fc-36c4-4eab-aaf3-962a5294face" (UID: "65df10fc-36c4-4eab-aaf3-962a5294face"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:42.974026 master-0 kubenswrapper[31411]: I0224 02:37:42.973693 31411 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:42.974026 master-0 kubenswrapper[31411]: I0224 02:37:42.973730 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/65df10fc-36c4-4eab-aaf3-962a5294face-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:43.034595 master-0 kubenswrapper[31411]: I0224 02:37:43.031492 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-external-api-0" Feb 24 02:37:43.064236 master-0 kubenswrapper[31411]: I0224 02:37:43.062817 31411 scope.go:117] "RemoveContainer" containerID="1e06af42a1cc4bcf137d694554941c425d9cdbfbc5a16e50b74086ff3b340afc" Feb 24 02:37:43.095135 master-0 kubenswrapper[31411]: I0224 02:37:43.095082 31411 scope.go:117] "RemoveContainer" containerID="73d9ad9599a99a5baabf99f214a4ed014fbc75e5756f2c85822b5bc219ed46c4" Feb 24 02:37:43.169645 master-0 kubenswrapper[31411]: I0224 02:37:43.169368 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7d5c157-238f-4062-8e57-dccee5fa4f9e" path="/var/lib/kubelet/pods/d7d5c157-238f-4062-8e57-dccee5fa4f9e/volumes" Feb 24 02:37:43.180174 master-0 kubenswrapper[31411]: I0224 02:37:43.179925 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-logs\") pod \"3e330fef-38b6-4b5d-b001-886ecfdd4028\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " Feb 24 02:37:43.180174 master-0 kubenswrapper[31411]: I0224 02:37:43.180034 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-combined-ca-bundle\") pod \"3e330fef-38b6-4b5d-b001-886ecfdd4028\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " Feb 24 02:37:43.180517 master-0 kubenswrapper[31411]: I0224 02:37:43.180224 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-httpd-run\") pod \"3e330fef-38b6-4b5d-b001-886ecfdd4028\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " Feb 24 02:37:43.180517 master-0 kubenswrapper[31411]: I0224 02:37:43.180349 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-scripts\") pod \"3e330fef-38b6-4b5d-b001-886ecfdd4028\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " Feb 24 02:37:43.180517 master-0 kubenswrapper[31411]: I0224 02:37:43.180509 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"3e330fef-38b6-4b5d-b001-886ecfdd4028\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " Feb 24 02:37:43.180686 master-0 kubenswrapper[31411]: I0224 02:37:43.180534 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-config-data\") pod \"3e330fef-38b6-4b5d-b001-886ecfdd4028\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " Feb 24 02:37:43.180686 master-0 kubenswrapper[31411]: I0224 02:37:43.180609 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-public-tls-certs\") pod \"3e330fef-38b6-4b5d-b001-886ecfdd4028\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " Feb 24 02:37:43.180686 
master-0 kubenswrapper[31411]: I0224 02:37:43.180683 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmgph\" (UniqueName: \"kubernetes.io/projected/3e330fef-38b6-4b5d-b001-886ecfdd4028-kube-api-access-kmgph\") pod \"3e330fef-38b6-4b5d-b001-886ecfdd4028\" (UID: \"3e330fef-38b6-4b5d-b001-886ecfdd4028\") " Feb 24 02:37:43.183096 master-0 kubenswrapper[31411]: I0224 02:37:43.183022 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3e330fef-38b6-4b5d-b001-886ecfdd4028" (UID: "3e330fef-38b6-4b5d-b001-886ecfdd4028"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:37:43.184943 master-0 kubenswrapper[31411]: I0224 02:37:43.184901 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-logs" (OuterVolumeSpecName: "logs") pod "3e330fef-38b6-4b5d-b001-886ecfdd4028" (UID: "3e330fef-38b6-4b5d-b001-886ecfdd4028"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:37:43.186510 master-0 kubenswrapper[31411]: I0224 02:37:43.186295 31411 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:43.186510 master-0 kubenswrapper[31411]: I0224 02:37:43.186356 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e330fef-38b6-4b5d-b001-886ecfdd4028-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:43.195464 master-0 kubenswrapper[31411]: I0224 02:37:43.195392 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e330fef-38b6-4b5d-b001-886ecfdd4028-kube-api-access-kmgph" (OuterVolumeSpecName: "kube-api-access-kmgph") pod "3e330fef-38b6-4b5d-b001-886ecfdd4028" (UID: "3e330fef-38b6-4b5d-b001-886ecfdd4028"). InnerVolumeSpecName "kube-api-access-kmgph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:43.197924 master-0 kubenswrapper[31411]: I0224 02:37:43.197865 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-05a5-account-create-update-bt8vb"] Feb 24 02:37:43.202253 master-0 kubenswrapper[31411]: I0224 02:37:43.202011 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-scripts" (OuterVolumeSpecName: "scripts") pod "3e330fef-38b6-4b5d-b001-886ecfdd4028" (UID: "3e330fef-38b6-4b5d-b001-886ecfdd4028"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:43.243121 master-0 kubenswrapper[31411]: I0224 02:37:43.242813 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e330fef-38b6-4b5d-b001-886ecfdd4028" (UID: "3e330fef-38b6-4b5d-b001-886ecfdd4028"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:43.252295 master-0 kubenswrapper[31411]: W0224 02:37:43.252227 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8c676cf_386b_455c_b9f8_b88a3a34a136.slice/crio-d74c62343886793796f91cdf7e5e8f9388fc9e71864eeacff4fa6e265e03b348 WatchSource:0}: Error finding container d74c62343886793796f91cdf7e5e8f9388fc9e71864eeacff4fa6e265e03b348: Status 404 returned error can't find the container with id d74c62343886793796f91cdf7e5e8f9388fc9e71864eeacff4fa6e265e03b348 Feb 24 02:37:43.291354 master-0 kubenswrapper[31411]: I0224 02:37:43.291300 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pf58r"] Feb 24 02:37:43.293210 master-0 kubenswrapper[31411]: I0224 02:37:43.293157 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66" (OuterVolumeSpecName: "glance") pod "3e330fef-38b6-4b5d-b001-886ecfdd4028" (UID: "3e330fef-38b6-4b5d-b001-886ecfdd4028"). InnerVolumeSpecName "pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 02:37:43.300484 master-0 kubenswrapper[31411]: I0224 02:37:43.300439 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmgph\" (UniqueName: \"kubernetes.io/projected/3e330fef-38b6-4b5d-b001-886ecfdd4028-kube-api-access-kmgph\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:43.300565 master-0 kubenswrapper[31411]: I0224 02:37:43.300498 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:43.300565 master-0 kubenswrapper[31411]: I0224 02:37:43.300512 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:43.300565 master-0 kubenswrapper[31411]: I0224 02:37:43.300545 31411 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") on node \"master-0\" " Feb 24 02:37:43.338840 master-0 kubenswrapper[31411]: I0224 02:37:43.338732 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3e330fef-38b6-4b5d-b001-886ecfdd4028" (UID: "3e330fef-38b6-4b5d-b001-886ecfdd4028"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:43.357437 master-0 kubenswrapper[31411]: I0224 02:37:43.357391 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57eabf84-c6fa-42eb-b9cc-5a07d1a482b8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^29278cd9-fc15-4e42-b224-808884eb5fd0\") pod \"glance-8705a-default-internal-api-0\" (UID: \"4a09f571-15e6-485f-af69-147f5e94181f\") " pod="openstack/glance-8705a-default-internal-api-0" Feb 24 02:37:43.365614 master-0 kubenswrapper[31411]: I0224 02:37:43.365525 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7331-account-create-update-4cdxr"] Feb 24 02:37:43.366239 master-0 kubenswrapper[31411]: I0224 02:37:43.366192 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-config-data" (OuterVolumeSpecName: "config-data") pod "3e330fef-38b6-4b5d-b001-886ecfdd4028" (UID: "3e330fef-38b6-4b5d-b001-886ecfdd4028"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:43.408287 master-0 kubenswrapper[31411]: I0224 02:37:43.408252 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:43.408468 master-0 kubenswrapper[31411]: I0224 02:37:43.408452 31411 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e330fef-38b6-4b5d-b001-886ecfdd4028-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:43.423741 master-0 kubenswrapper[31411]: I0224 02:37:43.423650 31411 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 24 02:37:43.424392 master-0 kubenswrapper[31411]: I0224 02:37:43.424360 31411 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6" (UniqueName: "kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66") on node "master-0"
Feb 24 02:37:43.481846 master-0 kubenswrapper[31411]: I0224 02:37:43.481673 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xrhk2"]
Feb 24 02:37:43.518099 master-0 kubenswrapper[31411]: I0224 02:37:43.514433 31411 reconciler_common.go:293] "Volume detached for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") on node \"master-0\" DevicePath \"\""
Feb 24 02:37:43.529853 master-0 kubenswrapper[31411]: I0224 02:37:43.524942 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d8b9-account-create-update-kq9f4"]
Feb 24 02:37:43.550329 master-0 kubenswrapper[31411]: I0224 02:37:43.550295 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d8zwm"]
Feb 24 02:37:43.568076 master-0 kubenswrapper[31411]: I0224 02:37:43.568017 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55455d5d8d-zzwzz"]
Feb 24 02:37:43.580331 master-0 kubenswrapper[31411]: I0224 02:37:43.580284 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:43.584373 master-0 kubenswrapper[31411]: I0224 02:37:43.584314 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-55455d5d8d-zzwzz"]
Feb 24 02:37:43.961630 master-0 kubenswrapper[31411]: I0224 02:37:43.961304 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pf58r" event={"ID":"b8c676cf-386b-455c-b9f8-b88a3a34a136","Type":"ContainerStarted","Data":"38ac37b6a76645d3cf79e9320b586251da08acfbf09dc87365e36648638ac236"}
Feb 24 02:37:43.961630 master-0 kubenswrapper[31411]: I0224 02:37:43.961368 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pf58r" event={"ID":"b8c676cf-386b-455c-b9f8-b88a3a34a136","Type":"ContainerStarted","Data":"d74c62343886793796f91cdf7e5e8f9388fc9e71864eeacff4fa6e265e03b348"}
Feb 24 02:37:43.972598 master-0 kubenswrapper[31411]: I0224 02:37:43.970351 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d8zwm" event={"ID":"0e4defeb-f6b0-46db-9acc-6df2d2490988","Type":"ContainerStarted","Data":"42a465e40ef7b774a830f3590092a0cf93e2e6de5a60c4d097f6652a6877c75a"}
Feb 24 02:37:43.972598 master-0 kubenswrapper[31411]: I0224 02:37:43.970415 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d8zwm" event={"ID":"0e4defeb-f6b0-46db-9acc-6df2d2490988","Type":"ContainerStarted","Data":"c8e4c6703ae7f976629c9672fab48af5a8505ebe0ee72c1d7a8cad7e83f838ea"}
Feb 24 02:37:43.976591 master-0 kubenswrapper[31411]: I0224 02:37:43.972842 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4" event={"ID":"85b32976-b3fc-498f-b15b-690cee8bfc95","Type":"ContainerStarted","Data":"ddb0bf913baafc8ac71e50bd0cf260ae7dc8f2f523de3cba1b5ccee0768e147a"}
Feb 24 02:37:43.976591 master-0 kubenswrapper[31411]: I0224 02:37:43.972879 31411 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4" event={"ID":"85b32976-b3fc-498f-b15b-690cee8bfc95","Type":"ContainerStarted","Data":"bfd808febe62f84818865ee571b761e239a8cf107711594e5d31854c8c4f7ab4"}
Feb 24 02:37:43.976591 master-0 kubenswrapper[31411]: I0224 02:37:43.975115 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm" event={"ID":"72106e8c-2a98-4a82-9f36-c820986c5665","Type":"ContainerStarted","Data":"d14cc8b840006c962f1d7d58a9a4a236da5dbd25d4e05db1a54d35988ef9ba25"}
Feb 24 02:37:43.976591 master-0 kubenswrapper[31411]: I0224 02:37:43.975744 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"
Feb 24 02:37:43.987603 master-0 kubenswrapper[31411]: I0224 02:37:43.977049 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-05a5-account-create-update-bt8vb" event={"ID":"8c1ff294-924a-46b5-9107-84238d30135f","Type":"ContainerStarted","Data":"b052e406402d006adcd8d7e77760763fe494cdc2a2e6d160a6e36e94f549d3d4"}
Feb 24 02:37:43.987603 master-0 kubenswrapper[31411]: I0224 02:37:43.977099 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-05a5-account-create-update-bt8vb" event={"ID":"8c1ff294-924a-46b5-9107-84238d30135f","Type":"ContainerStarted","Data":"7ea88c51b82789c2612c6b6897f22dd1fb751ee3f061f5d3ac924a7c53d478ee"}
Feb 24 02:37:43.987603 master-0 kubenswrapper[31411]: I0224 02:37:43.980996 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"361cf20c-412e-43a0-b2e7-44a9d4964b05","Type":"ContainerStarted","Data":"598a8669907a3251ba3dfcb7d059166093f3126c514c9648c64ad669fe42b0c3"}
Feb 24 02:37:44.001597 master-0 kubenswrapper[31411]: I0224 02:37:43.993961 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-pf58r" podStartSLOduration=9.99393876 podStartE2EDuration="9.99393876s" podCreationTimestamp="2026-02-24 02:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:43.984964168 +0000 UTC m=+1007.202162014" watchObservedRunningTime="2026-02-24 02:37:43.99393876 +0000 UTC m=+1007.211136606"
Feb 24 02:37:44.036879 master-0 kubenswrapper[31411]: I0224 02:37:44.036604 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4" podStartSLOduration=9.036501343 podStartE2EDuration="9.036501343s" podCreationTimestamp="2026-02-24 02:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:44.008173389 +0000 UTC m=+1007.225371235" watchObservedRunningTime="2026-02-24 02:37:44.036501343 +0000 UTC m=+1007.253699189"
Feb 24 02:37:44.058320 master-0 kubenswrapper[31411]: I0224 02:37:44.051672 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-8hw9n" event={"ID":"3695612d-87a3-4401-9303-05ae933a9f78","Type":"ContainerStarted","Data":"42fee9d4a5f97dca1ebdfeeb55d78aa1a173c534dbd768ee386a6a27c02352df"}
Feb 24 02:37:44.058320 master-0 kubenswrapper[31411]: I0224 02:37:44.055773 31411 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.058320 master-0 kubenswrapper[31411]: I0224 02:37:44.055768 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"3e330fef-38b6-4b5d-b001-886ecfdd4028","Type":"ContainerDied","Data":"012d64fe28e7c9831f2ab24bc518fc8d7b005b340b7e3147a5d25b756f0f93d4"}
Feb 24 02:37:44.058320 master-0 kubenswrapper[31411]: I0224 02:37:44.056254 31411 scope.go:117] "RemoveContainer" containerID="e7ed8eeb5bf5c7ee8f1fab4c4a4bb1d397726e2e8d6ff8b11297690c72eea8cf"
Feb 24 02:37:44.058320 master-0 kubenswrapper[31411]: I0224 02:37:44.057305 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7331-account-create-update-4cdxr" event={"ID":"264be425-2328-436a-9a2d-0215e640276c","Type":"ContainerStarted","Data":"f3b27cff129a8568d482620266fcff99c1cd11397d9f6cef9d597683f1f311f1"}
Feb 24 02:37:44.058320 master-0 kubenswrapper[31411]: I0224 02:37:44.057333 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7331-account-create-update-4cdxr" event={"ID":"264be425-2328-436a-9a2d-0215e640276c","Type":"ContainerStarted","Data":"a57cb8b32405bbdb8a2f1bf41e61d55f9e7dede930df7d38c69829f9fc31d2d8"}
Feb 24 02:37:44.079150 master-0 kubenswrapper[31411]: I0224 02:37:44.078888 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xrhk2" event={"ID":"80c28d7c-77ba-477e-9b90-8432c2c7b48f","Type":"ContainerStarted","Data":"4e6f99bfdd10b2eb4b70128246adebd3bd4068497ea6351d2322dddc725b1621"}
Feb 24 02:37:44.079150 master-0 kubenswrapper[31411]: I0224 02:37:44.078963 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xrhk2" event={"ID":"80c28d7c-77ba-477e-9b90-8432c2c7b48f","Type":"ContainerStarted","Data":"46aaee8b06bd79130a707c2aaf60707ce33a41a6a4b4bfa3273759592b1f2a0f"}
Feb 24 02:37:44.100392 master-0 kubenswrapper[31411]: I0224 02:37:44.099965 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.287112279 podStartE2EDuration="21.099938651s" podCreationTimestamp="2026-02-24 02:37:23 +0000 UTC" firstStartedPulling="2026-02-24 02:37:24.65473185 +0000 UTC m=+987.871929696" lastFinishedPulling="2026-02-24 02:37:42.467558222 +0000 UTC m=+1005.684756068" observedRunningTime="2026-02-24 02:37:44.041178354 +0000 UTC m=+1007.258376200" watchObservedRunningTime="2026-02-24 02:37:44.099938651 +0000 UTC m=+1007.317136497"
Feb 24 02:37:44.122864 master-0 kubenswrapper[31411]: I0224 02:37:44.121069 31411 scope.go:117] "RemoveContainer" containerID="4e2c4e26527f21cb6fd2affdd10a05d6c19198a22fef1d0abeba78f369c2da3a"
Feb 24 02:37:44.157385 master-0 kubenswrapper[31411]: I0224 02:37:44.154963 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-d8zwm" podStartSLOduration=10.154935133 podStartE2EDuration="10.154935133s" podCreationTimestamp="2026-02-24 02:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:44.065226318 +0000 UTC m=+1007.282424164" watchObservedRunningTime="2026-02-24 02:37:44.154935133 +0000 UTC m=+1007.372132979"
Feb 24 02:37:44.182387 master-0 kubenswrapper[31411]: I0224 02:37:44.181329 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-05a5-account-create-update-bt8vb" podStartSLOduration=10.177080744 podStartE2EDuration="10.177080744s" podCreationTimestamp="2026-02-24 02:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:44.081939637 +0000 UTC m=+1007.299137483" watchObservedRunningTime="2026-02-24 02:37:44.177080744 +0000 UTC m=+1007.394278590"
Feb 24 02:37:44.281193 master-0
kubenswrapper[31411]: I0224 02:37:44.281065 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-xrhk2" podStartSLOduration=10.275917344 podStartE2EDuration="10.275917344s" podCreationTimestamp="2026-02-24 02:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:44.124989423 +0000 UTC m=+1007.342187259" watchObservedRunningTime="2026-02-24 02:37:44.275917344 +0000 UTC m=+1007.493115180"
Feb 24 02:37:44.309975 master-0 kubenswrapper[31411]: I0224 02:37:44.309794 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8705a-default-external-api-0"]
Feb 24 02:37:44.331788 master-0 kubenswrapper[31411]: I0224 02:37:44.330793 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8705a-default-external-api-0"]
Feb 24 02:37:44.341826 master-0 kubenswrapper[31411]: I0224 02:37:44.341740 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7331-account-create-update-4cdxr" podStartSLOduration=9.341716319 podStartE2EDuration="9.341716319s" podCreationTimestamp="2026-02-24 02:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:44.181542669 +0000 UTC m=+1007.398740515" watchObservedRunningTime="2026-02-24 02:37:44.341716319 +0000 UTC m=+1007.558914165"
Feb 24 02:37:44.362499 master-0 kubenswrapper[31411]: I0224 02:37:44.362421 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8705a-default-external-api-0"]
Feb 24 02:37:44.363186 master-0 kubenswrapper[31411]: E0224 02:37:44.363158 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65df10fc-36c4-4eab-aaf3-962a5294face" containerName="neutron-api"
Feb 24 02:37:44.363186 master-0 kubenswrapper[31411]: I0224 02:37:44.363182 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="65df10fc-36c4-4eab-aaf3-962a5294face" containerName="neutron-api"
Feb 24 02:37:44.363257 master-0 kubenswrapper[31411]: E0224 02:37:44.363206 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-httpd"
Feb 24 02:37:44.363257 master-0 kubenswrapper[31411]: I0224 02:37:44.363213 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-httpd"
Feb 24 02:37:44.363257 master-0 kubenswrapper[31411]: E0224 02:37:44.363251 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-log"
Feb 24 02:37:44.363257 master-0 kubenswrapper[31411]: I0224 02:37:44.363258 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-log"
Feb 24 02:37:44.363379 master-0 kubenswrapper[31411]: E0224 02:37:44.363270 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65df10fc-36c4-4eab-aaf3-962a5294face" containerName="neutron-httpd"
Feb 24 02:37:44.363379 master-0 kubenswrapper[31411]: I0224 02:37:44.363278 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="65df10fc-36c4-4eab-aaf3-962a5294face" containerName="neutron-httpd"
Feb 24 02:37:44.363611 master-0 kubenswrapper[31411]: I0224 02:37:44.363592 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="65df10fc-36c4-4eab-aaf3-962a5294face" containerName="neutron-httpd"
Feb 24 02:37:44.363687 master-0 kubenswrapper[31411]: I0224 02:37:44.363626 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-httpd"
Feb 24 02:37:44.363687 master-0 kubenswrapper[31411]: I0224 02:37:44.363641 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" containerName="glance-log"
Feb 24 02:37:44.363687 master-0 kubenswrapper[31411]: I0224 02:37:44.363676 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="65df10fc-36c4-4eab-aaf3-962a5294face" containerName="neutron-api"
Feb 24 02:37:44.365492 master-0 kubenswrapper[31411]: I0224 02:37:44.365459 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.379519 master-0 kubenswrapper[31411]: I0224 02:37:44.379483 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-8705a-default-external-config-data"
Feb 24 02:37:44.379758 master-0 kubenswrapper[31411]: I0224 02:37:44.379706 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 24 02:37:44.380431 master-0 kubenswrapper[31411]: I0224 02:37:44.380378 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-external-api-0"]
Feb 24 02:37:44.381159 master-0 kubenswrapper[31411]: I0224 02:37:44.381065 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-8hw9n" podStartSLOduration=4.48504455 podStartE2EDuration="14.381042911s" podCreationTimestamp="2026-02-24 02:37:30 +0000 UTC" firstStartedPulling="2026-02-24 02:37:32.627426127 +0000 UTC m=+995.844623983" lastFinishedPulling="2026-02-24 02:37:42.523424498 +0000 UTC m=+1005.740622344" observedRunningTime="2026-02-24 02:37:44.224814132 +0000 UTC m=+1007.442011978" watchObservedRunningTime="2026-02-24 02:37:44.381042911 +0000 UTC m=+1007.598240757"
Feb 24 02:37:44.416724 master-0 kubenswrapper[31411]: I0224 02:37:44.416652 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-internal-api-0"]
Feb 24 02:37:44.550172 master-0 kubenswrapper[31411]: I0224 02:37:44.549556 31411 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-public-tls-certs\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.550172 master-0 kubenswrapper[31411]: I0224 02:37:44.549634 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9860c0f4-4e67-4271-8e5f-e69506201f8b-logs\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.550172 master-0 kubenswrapper[31411]: I0224 02:37:44.549699 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8czq\" (UniqueName: \"kubernetes.io/projected/9860c0f4-4e67-4271-8e5f-e69506201f8b-kube-api-access-r8czq\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.550172 master-0 kubenswrapper[31411]: I0224 02:37:44.549904 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.550452 master-0 kubenswrapper[31411]: I0224 02:37:44.550296 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9860c0f4-4e67-4271-8e5f-e69506201f8b-httpd-run\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.550903 master-0 kubenswrapper[31411]: I0224 02:37:44.550810 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-config-data\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.550991 master-0 kubenswrapper[31411]: I0224 02:37:44.550965 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-combined-ca-bundle\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.551072 master-0 kubenswrapper[31411]: I0224 02:37:44.551036 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-scripts\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.653909 master-0 kubenswrapper[31411]: I0224 02:37:44.653455 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8czq\" (UniqueName: \"kubernetes.io/projected/9860c0f4-4e67-4271-8e5f-e69506201f8b-kube-api-access-r8czq\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.654182 master-0 kubenswrapper[31411]: I0224 02:37:44.653943 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.654182 master-0 kubenswrapper[31411]: I0224 02:37:44.654030 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9860c0f4-4e67-4271-8e5f-e69506201f8b-httpd-run\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.654736 master-0 kubenswrapper[31411]: I0224 02:37:44.654704 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9860c0f4-4e67-4271-8e5f-e69506201f8b-httpd-run\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.654843 master-0 kubenswrapper[31411]: I0224 02:37:44.654817 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-config-data\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.656174 master-0 kubenswrapper[31411]: I0224 02:37:44.655993 31411 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 02:37:44.656174 master-0 kubenswrapper[31411]: I0224 02:37:44.656047 31411 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e389381360593e6090fc0484f10effeb9de4577cbce267bb52534b321405c56c/globalmount\"" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.659929 master-0 kubenswrapper[31411]: I0224 02:37:44.659612 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-config-data\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.662191 master-0 kubenswrapper[31411]: I0224 02:37:44.662143 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-combined-ca-bundle\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.662252 master-0 kubenswrapper[31411]: I0224 02:37:44.662222 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-scripts\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.662307 master-0 kubenswrapper[31411]: I0224 02:37:44.662286 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-public-tls-certs\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.662356 master-0 kubenswrapper[31411]: I0224 02:37:44.662334 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9860c0f4-4e67-4271-8e5f-e69506201f8b-logs\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.662987 master-0 kubenswrapper[31411]: I0224 02:37:44.662877 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9860c0f4-4e67-4271-8e5f-e69506201f8b-logs\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.684627 master-0 kubenswrapper[31411]: I0224 02:37:44.677883 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-combined-ca-bundle\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.684627 master-0 kubenswrapper[31411]: I0224 02:37:44.678831 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-public-tls-certs\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.684627 master-0 kubenswrapper[31411]: I0224 02:37:44.682986 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8czq\" (UniqueName: \"kubernetes.io/projected/9860c0f4-4e67-4271-8e5f-e69506201f8b-kube-api-access-r8czq\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:44.708528 master-0 kubenswrapper[31411]: I0224 02:37:44.708469 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9860c0f4-4e67-4271-8e5f-e69506201f8b-scripts\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:45.184644 master-0 kubenswrapper[31411]: I0224 02:37:45.179861 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e330fef-38b6-4b5d-b001-886ecfdd4028" path="/var/lib/kubelet/pods/3e330fef-38b6-4b5d-b001-886ecfdd4028/volumes"
Feb 24 02:37:45.184644 master-0 kubenswrapper[31411]: I0224 02:37:45.180349 31411 generic.go:334] "Generic (PLEG): container finished" podID="b8c676cf-386b-455c-b9f8-b88a3a34a136" containerID="38ac37b6a76645d3cf79e9320b586251da08acfbf09dc87365e36648638ac236" exitCode=0
Feb 24 02:37:45.184644 master-0 kubenswrapper[31411]: I0224 02:37:45.183241 31411 generic.go:334] "Generic (PLEG): container finished" podID="0e4defeb-f6b0-46db-9acc-6df2d2490988" containerID="42a465e40ef7b774a830f3590092a0cf93e2e6de5a60c4d097f6652a6877c75a" exitCode=0
Feb 24 02:37:45.185196 master-0 kubenswrapper[31411]: I0224 02:37:45.185132 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65df10fc-36c4-4eab-aaf3-962a5294face" path="/var/lib/kubelet/pods/65df10fc-36c4-4eab-aaf3-962a5294face/volumes"
Feb 24 02:37:45.186434 master-0 kubenswrapper[31411]: I0224 02:37:45.186360 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0"
event={"ID":"4a09f571-15e6-485f-af69-147f5e94181f","Type":"ContainerStarted","Data":"edcee0c091c3cc62f80e3897368497848c40d25fab843ddc1a9c607a647347d9"}
Feb 24 02:37:45.186434 master-0 kubenswrapper[31411]: I0224 02:37:45.186402 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"4a09f571-15e6-485f-af69-147f5e94181f","Type":"ContainerStarted","Data":"c28130e647b5181156f4e55d08c1f5c7510670b83c1fc09e088fe2cf44d90939"}
Feb 24 02:37:45.186434 master-0 kubenswrapper[31411]: I0224 02:37:45.186421 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pf58r" event={"ID":"b8c676cf-386b-455c-b9f8-b88a3a34a136","Type":"ContainerDied","Data":"38ac37b6a76645d3cf79e9320b586251da08acfbf09dc87365e36648638ac236"}
Feb 24 02:37:45.186434 master-0 kubenswrapper[31411]: I0224 02:37:45.186441 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d8zwm" event={"ID":"0e4defeb-f6b0-46db-9acc-6df2d2490988","Type":"ContainerDied","Data":"42a465e40ef7b774a830f3590092a0cf93e2e6de5a60c4d097f6652a6877c75a"}
Feb 24 02:37:45.187300 master-0 kubenswrapper[31411]: I0224 02:37:45.186863 31411 generic.go:334] "Generic (PLEG): container finished" podID="85b32976-b3fc-498f-b15b-690cee8bfc95" containerID="ddb0bf913baafc8ac71e50bd0cf260ae7dc8f2f523de3cba1b5ccee0768e147a" exitCode=0
Feb 24 02:37:45.187300 master-0 kubenswrapper[31411]: I0224 02:37:45.186924 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4" event={"ID":"85b32976-b3fc-498f-b15b-690cee8bfc95","Type":"ContainerDied","Data":"ddb0bf913baafc8ac71e50bd0cf260ae7dc8f2f523de3cba1b5ccee0768e147a"}
Feb 24 02:37:45.195654 master-0 kubenswrapper[31411]: I0224 02:37:45.195398 31411 generic.go:334] "Generic (PLEG): container finished" podID="8c1ff294-924a-46b5-9107-84238d30135f" containerID="b052e406402d006adcd8d7e77760763fe494cdc2a2e6d160a6e36e94f549d3d4" exitCode=0
Feb 24 02:37:45.195654 master-0 kubenswrapper[31411]: I0224 02:37:45.195553 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-05a5-account-create-update-bt8vb" event={"ID":"8c1ff294-924a-46b5-9107-84238d30135f","Type":"ContainerDied","Data":"b052e406402d006adcd8d7e77760763fe494cdc2a2e6d160a6e36e94f549d3d4"}
Feb 24 02:37:45.203238 master-0 kubenswrapper[31411]: I0224 02:37:45.202989 31411 generic.go:334] "Generic (PLEG): container finished" podID="264be425-2328-436a-9a2d-0215e640276c" containerID="f3b27cff129a8568d482620266fcff99c1cd11397d9f6cef9d597683f1f311f1" exitCode=0
Feb 24 02:37:45.203238 master-0 kubenswrapper[31411]: I0224 02:37:45.203082 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7331-account-create-update-4cdxr" event={"ID":"264be425-2328-436a-9a2d-0215e640276c","Type":"ContainerDied","Data":"f3b27cff129a8568d482620266fcff99c1cd11397d9f6cef9d597683f1f311f1"}
Feb 24 02:37:45.217681 master-0 kubenswrapper[31411]: I0224 02:37:45.213935 31411 generic.go:334] "Generic (PLEG): container finished" podID="80c28d7c-77ba-477e-9b90-8432c2c7b48f" containerID="4e6f99bfdd10b2eb4b70128246adebd3bd4068497ea6351d2322dddc725b1621" exitCode=0
Feb 24 02:37:45.217681 master-0 kubenswrapper[31411]: I0224 02:37:45.214268 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xrhk2" event={"ID":"80c28d7c-77ba-477e-9b90-8432c2c7b48f","Type":"ContainerDied","Data":"4e6f99bfdd10b2eb4b70128246adebd3bd4068497ea6351d2322dddc725b1621"}
Feb 24 02:37:45.821422 master-0 kubenswrapper[31411]: I0224 02:37:45.821380 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-77a90a1f-3b19-443f-bfa7-9776b1f847b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a5d836f3-76db-4c3b-84c0-25ab845f7f66\") pod \"glance-8705a-default-external-api-0\" (UID: \"9860c0f4-4e67-4271-8e5f-e69506201f8b\") " pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:45.996588 master-0 kubenswrapper[31411]: I0224 02:37:45.996449 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:46.231014 master-0 kubenswrapper[31411]: I0224 02:37:46.230929 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-internal-api-0" event={"ID":"4a09f571-15e6-485f-af69-147f5e94181f","Type":"ContainerStarted","Data":"2dc9a6500a322d74be919de210e4e8dfa0060453dfca49f9a67c7926b0a2fd46"}
Feb 24 02:37:46.235160 master-0 kubenswrapper[31411]: I0224 02:37:46.235124 31411 generic.go:334] "Generic (PLEG): container finished" podID="3695612d-87a3-4401-9303-05ae933a9f78" containerID="42fee9d4a5f97dca1ebdfeeb55d78aa1a173c534dbd768ee386a6a27c02352df" exitCode=0
Feb 24 02:37:46.235306 master-0 kubenswrapper[31411]: I0224 02:37:46.235264 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-8hw9n" event={"ID":"3695612d-87a3-4401-9303-05ae933a9f78","Type":"ContainerDied","Data":"42fee9d4a5f97dca1ebdfeeb55d78aa1a173c534dbd768ee386a6a27c02352df"}
Feb 24 02:37:46.314320 master-0 kubenswrapper[31411]: I0224 02:37:46.314181 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-7d8f6784f6-dqjdm"
Feb 24 02:37:46.320701 master-0 kubenswrapper[31411]: I0224 02:37:46.320552 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8705a-default-internal-api-0" podStartSLOduration=5.320530941 podStartE2EDuration="5.320530941s" podCreationTimestamp="2026-02-24 02:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:46.309253335 +0000 UTC m=+1009.526451181" watchObservedRunningTime="2026-02-24
02:37:46.320530941 +0000 UTC m=+1009.537728787" Feb 24 02:37:46.660876 master-0 kubenswrapper[31411]: I0224 02:37:46.660800 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8705a-default-external-api-0"] Feb 24 02:37:50.803511 master-0 kubenswrapper[31411]: W0224 02:37:50.803445 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9860c0f4_4e67_4271_8e5f_e69506201f8b.slice/crio-2263dcfc50c55644569e380ab452909811ca39936d0a525456e01cd1f591464a WatchSource:0}: Error finding container 2263dcfc50c55644569e380ab452909811ca39936d0a525456e01cd1f591464a: Status 404 returned error can't find the container with id 2263dcfc50c55644569e380ab452909811ca39936d0a525456e01cd1f591464a Feb 24 02:37:50.998284 master-0 kubenswrapper[31411]: I0224 02:37:50.998234 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pf58r" Feb 24 02:37:51.054415 master-0 kubenswrapper[31411]: I0224 02:37:51.054370 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-05a5-account-create-update-bt8vb" Feb 24 02:37:51.055911 master-0 kubenswrapper[31411]: I0224 02:37:51.055861 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xrhk2" Feb 24 02:37:51.162232 master-0 kubenswrapper[31411]: I0224 02:37:51.162189 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d8zwm" Feb 24 02:37:51.173268 master-0 kubenswrapper[31411]: I0224 02:37:51.173221 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4" Feb 24 02:37:51.185492 master-0 kubenswrapper[31411]: I0224 02:37:51.185342 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdr9m\" (UniqueName: \"kubernetes.io/projected/80c28d7c-77ba-477e-9b90-8432c2c7b48f-kube-api-access-hdr9m\") pod \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\" (UID: \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\") " Feb 24 02:37:51.186919 master-0 kubenswrapper[31411]: I0224 02:37:51.186872 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:51.187531 master-0 kubenswrapper[31411]: I0224 02:37:51.187492 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c28d7c-77ba-477e-9b90-8432c2c7b48f-operator-scripts\") pod \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\" (UID: \"80c28d7c-77ba-477e-9b90-8432c2c7b48f\") " Feb 24 02:37:51.187663 master-0 kubenswrapper[31411]: I0224 02:37:51.187651 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c676cf-386b-455c-b9f8-b88a3a34a136-operator-scripts\") pod \"b8c676cf-386b-455c-b9f8-b88a3a34a136\" (UID: \"b8c676cf-386b-455c-b9f8-b88a3a34a136\") " Feb 24 02:37:51.187745 master-0 kubenswrapper[31411]: I0224 02:37:51.187725 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1ff294-924a-46b5-9107-84238d30135f-operator-scripts\") pod \"8c1ff294-924a-46b5-9107-84238d30135f\" (UID: \"8c1ff294-924a-46b5-9107-84238d30135f\") " Feb 24 02:37:51.187788 master-0 kubenswrapper[31411]: I0224 02:37:51.187757 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsktd\" (UniqueName: 
\"kubernetes.io/projected/8c1ff294-924a-46b5-9107-84238d30135f-kube-api-access-zsktd\") pod \"8c1ff294-924a-46b5-9107-84238d30135f\" (UID: \"8c1ff294-924a-46b5-9107-84238d30135f\") " Feb 24 02:37:51.188114 master-0 kubenswrapper[31411]: I0224 02:37:51.187968 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zcjv\" (UniqueName: \"kubernetes.io/projected/b8c676cf-386b-455c-b9f8-b88a3a34a136-kube-api-access-5zcjv\") pod \"b8c676cf-386b-455c-b9f8-b88a3a34a136\" (UID: \"b8c676cf-386b-455c-b9f8-b88a3a34a136\") " Feb 24 02:37:51.188160 master-0 kubenswrapper[31411]: I0224 02:37:51.188134 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c28d7c-77ba-477e-9b90-8432c2c7b48f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80c28d7c-77ba-477e-9b90-8432c2c7b48f" (UID: "80c28d7c-77ba-477e-9b90-8432c2c7b48f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:51.189246 master-0 kubenswrapper[31411]: I0224 02:37:51.189179 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1ff294-924a-46b5-9107-84238d30135f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c1ff294-924a-46b5-9107-84238d30135f" (UID: "8c1ff294-924a-46b5-9107-84238d30135f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:51.189510 master-0 kubenswrapper[31411]: I0224 02:37:51.189464 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c676cf-386b-455c-b9f8-b88a3a34a136-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8c676cf-386b-455c-b9f8-b88a3a34a136" (UID: "b8c676cf-386b-455c-b9f8-b88a3a34a136"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:51.191947 master-0 kubenswrapper[31411]: I0224 02:37:51.191907 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c28d7c-77ba-477e-9b90-8432c2c7b48f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.191947 master-0 kubenswrapper[31411]: I0224 02:37:51.191940 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c676cf-386b-455c-b9f8-b88a3a34a136-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.192037 master-0 kubenswrapper[31411]: I0224 02:37:51.191953 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c1ff294-924a-46b5-9107-84238d30135f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.192642 master-0 kubenswrapper[31411]: I0224 02:37:51.192550 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c676cf-386b-455c-b9f8-b88a3a34a136-kube-api-access-5zcjv" (OuterVolumeSpecName: "kube-api-access-5zcjv") pod "b8c676cf-386b-455c-b9f8-b88a3a34a136" (UID: "b8c676cf-386b-455c-b9f8-b88a3a34a136"). InnerVolumeSpecName "kube-api-access-5zcjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:51.195166 master-0 kubenswrapper[31411]: I0224 02:37:51.195125 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1ff294-924a-46b5-9107-84238d30135f-kube-api-access-zsktd" (OuterVolumeSpecName: "kube-api-access-zsktd") pod "8c1ff294-924a-46b5-9107-84238d30135f" (UID: "8c1ff294-924a-46b5-9107-84238d30135f"). InnerVolumeSpecName "kube-api-access-zsktd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:51.197600 master-0 kubenswrapper[31411]: I0224 02:37:51.197322 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80c28d7c-77ba-477e-9b90-8432c2c7b48f-kube-api-access-hdr9m" (OuterVolumeSpecName: "kube-api-access-hdr9m") pod "80c28d7c-77ba-477e-9b90-8432c2c7b48f" (UID: "80c28d7c-77ba-477e-9b90-8432c2c7b48f"). InnerVolumeSpecName "kube-api-access-hdr9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:51.223971 master-0 kubenswrapper[31411]: I0224 02:37:51.223912 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7331-account-create-update-4cdxr" Feb 24 02:37:51.298685 master-0 kubenswrapper[31411]: I0224 02:37:51.298512 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvdr\" (UniqueName: \"kubernetes.io/projected/3695612d-87a3-4401-9303-05ae933a9f78-kube-api-access-2bvdr\") pod \"3695612d-87a3-4401-9303-05ae933a9f78\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " Feb 24 02:37:51.299015 master-0 kubenswrapper[31411]: I0224 02:37:51.298734 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-config\") pod \"3695612d-87a3-4401-9303-05ae933a9f78\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " Feb 24 02:37:51.299408 master-0 kubenswrapper[31411]: I0224 02:37:51.299348 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"3695612d-87a3-4401-9303-05ae933a9f78\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " Feb 24 02:37:51.299646 master-0 kubenswrapper[31411]: I0224 02:37:51.299620 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4defeb-f6b0-46db-9acc-6df2d2490988-operator-scripts\") pod \"0e4defeb-f6b0-46db-9acc-6df2d2490988\" (UID: \"0e4defeb-f6b0-46db-9acc-6df2d2490988\") " Feb 24 02:37:51.299774 master-0 kubenswrapper[31411]: I0224 02:37:51.299685 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b32976-b3fc-498f-b15b-690cee8bfc95-operator-scripts\") pod \"85b32976-b3fc-498f-b15b-690cee8bfc95\" (UID: \"85b32976-b3fc-498f-b15b-690cee8bfc95\") " Feb 24 02:37:51.299774 master-0 kubenswrapper[31411]: I0224 02:37:51.299730 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tvw5\" (UniqueName: \"kubernetes.io/projected/264be425-2328-436a-9a2d-0215e640276c-kube-api-access-8tvw5\") pod \"264be425-2328-436a-9a2d-0215e640276c\" (UID: \"264be425-2328-436a-9a2d-0215e640276c\") " Feb 24 02:37:51.300146 master-0 kubenswrapper[31411]: I0224 02:37:51.300092 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4defeb-f6b0-46db-9acc-6df2d2490988-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e4defeb-f6b0-46db-9acc-6df2d2490988" (UID: "0e4defeb-f6b0-46db-9acc-6df2d2490988"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:51.300265 master-0 kubenswrapper[31411]: I0224 02:37:51.300182 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b32976-b3fc-498f-b15b-690cee8bfc95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85b32976-b3fc-498f-b15b-690cee8bfc95" (UID: "85b32976-b3fc-498f-b15b-690cee8bfc95"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:51.300694 master-0 kubenswrapper[31411]: I0224 02:37:51.300666 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfckq\" (UniqueName: \"kubernetes.io/projected/0e4defeb-f6b0-46db-9acc-6df2d2490988-kube-api-access-rfckq\") pod \"0e4defeb-f6b0-46db-9acc-6df2d2490988\" (UID: \"0e4defeb-f6b0-46db-9acc-6df2d2490988\") " Feb 24 02:37:51.300766 master-0 kubenswrapper[31411]: I0224 02:37:51.300724 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6j65\" (UniqueName: \"kubernetes.io/projected/85b32976-b3fc-498f-b15b-690cee8bfc95-kube-api-access-c6j65\") pod \"85b32976-b3fc-498f-b15b-690cee8bfc95\" (UID: \"85b32976-b3fc-498f-b15b-690cee8bfc95\") " Feb 24 02:37:51.300806 master-0 kubenswrapper[31411]: I0224 02:37:51.300760 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-combined-ca-bundle\") pod \"3695612d-87a3-4401-9303-05ae933a9f78\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " Feb 24 02:37:51.300891 master-0 kubenswrapper[31411]: I0224 02:37:51.300863 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-scripts\") pod \"3695612d-87a3-4401-9303-05ae933a9f78\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " Feb 24 02:37:51.301023 master-0 kubenswrapper[31411]: I0224 02:37:51.300997 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/264be425-2328-436a-9a2d-0215e640276c-operator-scripts\") pod \"264be425-2328-436a-9a2d-0215e640276c\" (UID: \"264be425-2328-436a-9a2d-0215e640276c\") " Feb 24 02:37:51.301098 master-0 kubenswrapper[31411]: I0224 02:37:51.301069 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic\") pod \"3695612d-87a3-4401-9303-05ae933a9f78\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " Feb 24 02:37:51.301147 master-0 kubenswrapper[31411]: I0224 02:37:51.301110 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3695612d-87a3-4401-9303-05ae933a9f78-etc-podinfo\") pod \"3695612d-87a3-4401-9303-05ae933a9f78\" (UID: \"3695612d-87a3-4401-9303-05ae933a9f78\") " Feb 24 02:37:51.301530 master-0 kubenswrapper[31411]: I0224 02:37:51.301485 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "3695612d-87a3-4401-9303-05ae933a9f78" (UID: "3695612d-87a3-4401-9303-05ae933a9f78"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:37:51.301907 master-0 kubenswrapper[31411]: I0224 02:37:51.301875 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/264be425-2328-436a-9a2d-0215e640276c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "264be425-2328-436a-9a2d-0215e640276c" (UID: "264be425-2328-436a-9a2d-0215e640276c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:37:51.301965 master-0 kubenswrapper[31411]: I0224 02:37:51.301901 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "3695612d-87a3-4401-9303-05ae933a9f78" (UID: "3695612d-87a3-4401-9303-05ae933a9f78"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:37:51.303522 master-0 kubenswrapper[31411]: I0224 02:37:51.303486 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zcjv\" (UniqueName: \"kubernetes.io/projected/b8c676cf-386b-455c-b9f8-b88a3a34a136-kube-api-access-5zcjv\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.303522 master-0 kubenswrapper[31411]: I0224 02:37:51.303514 31411 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.303801 master-0 kubenswrapper[31411]: I0224 02:37:51.303531 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4defeb-f6b0-46db-9acc-6df2d2490988-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.303801 master-0 kubenswrapper[31411]: I0224 02:37:51.303544 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85b32976-b3fc-498f-b15b-690cee8bfc95-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.303801 master-0 kubenswrapper[31411]: I0224 02:37:51.303563 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdr9m\" (UniqueName: \"kubernetes.io/projected/80c28d7c-77ba-477e-9b90-8432c2c7b48f-kube-api-access-hdr9m\") on 
node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.303801 master-0 kubenswrapper[31411]: I0224 02:37:51.303614 31411 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/264be425-2328-436a-9a2d-0215e640276c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.303801 master-0 kubenswrapper[31411]: I0224 02:37:51.303625 31411 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3695612d-87a3-4401-9303-05ae933a9f78-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.303801 master-0 kubenswrapper[31411]: I0224 02:37:51.303641 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsktd\" (UniqueName: \"kubernetes.io/projected/8c1ff294-924a-46b5-9107-84238d30135f-kube-api-access-zsktd\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.304072 master-0 kubenswrapper[31411]: I0224 02:37:51.304007 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264be425-2328-436a-9a2d-0215e640276c-kube-api-access-8tvw5" (OuterVolumeSpecName: "kube-api-access-8tvw5") pod "264be425-2328-436a-9a2d-0215e640276c" (UID: "264be425-2328-436a-9a2d-0215e640276c"). InnerVolumeSpecName "kube-api-access-8tvw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:51.306466 master-0 kubenswrapper[31411]: I0224 02:37:51.306418 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-scripts" (OuterVolumeSpecName: "scripts") pod "3695612d-87a3-4401-9303-05ae933a9f78" (UID: "3695612d-87a3-4401-9303-05ae933a9f78"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:51.307259 master-0 kubenswrapper[31411]: I0224 02:37:51.306856 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b32976-b3fc-498f-b15b-690cee8bfc95-kube-api-access-c6j65" (OuterVolumeSpecName: "kube-api-access-c6j65") pod "85b32976-b3fc-498f-b15b-690cee8bfc95" (UID: "85b32976-b3fc-498f-b15b-690cee8bfc95"). InnerVolumeSpecName "kube-api-access-c6j65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:51.307396 master-0 kubenswrapper[31411]: I0224 02:37:51.307347 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3695612d-87a3-4401-9303-05ae933a9f78-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "3695612d-87a3-4401-9303-05ae933a9f78" (UID: "3695612d-87a3-4401-9303-05ae933a9f78"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 02:37:51.309003 master-0 kubenswrapper[31411]: I0224 02:37:51.308943 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4defeb-f6b0-46db-9acc-6df2d2490988-kube-api-access-rfckq" (OuterVolumeSpecName: "kube-api-access-rfckq") pod "0e4defeb-f6b0-46db-9acc-6df2d2490988" (UID: "0e4defeb-f6b0-46db-9acc-6df2d2490988"). InnerVolumeSpecName "kube-api-access-rfckq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:51.310227 master-0 kubenswrapper[31411]: I0224 02:37:51.310181 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3695612d-87a3-4401-9303-05ae933a9f78-kube-api-access-2bvdr" (OuterVolumeSpecName: "kube-api-access-2bvdr") pod "3695612d-87a3-4401-9303-05ae933a9f78" (UID: "3695612d-87a3-4401-9303-05ae933a9f78"). InnerVolumeSpecName "kube-api-access-2bvdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:37:51.348352 master-0 kubenswrapper[31411]: I0224 02:37:51.348293 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7331-account-create-update-4cdxr" event={"ID":"264be425-2328-436a-9a2d-0215e640276c","Type":"ContainerDied","Data":"a57cb8b32405bbdb8a2f1bf41e61d55f9e7dede930df7d38c69829f9fc31d2d8"} Feb 24 02:37:51.348590 master-0 kubenswrapper[31411]: I0224 02:37:51.348360 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a57cb8b32405bbdb8a2f1bf41e61d55f9e7dede930df7d38c69829f9fc31d2d8" Feb 24 02:37:51.348590 master-0 kubenswrapper[31411]: I0224 02:37:51.348358 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7331-account-create-update-4cdxr" Feb 24 02:37:51.351171 master-0 kubenswrapper[31411]: I0224 02:37:51.351121 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xrhk2" event={"ID":"80c28d7c-77ba-477e-9b90-8432c2c7b48f","Type":"ContainerDied","Data":"46aaee8b06bd79130a707c2aaf60707ce33a41a6a4b4bfa3273759592b1f2a0f"} Feb 24 02:37:51.351235 master-0 kubenswrapper[31411]: I0224 02:37:51.351179 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46aaee8b06bd79130a707c2aaf60707ce33a41a6a4b4bfa3273759592b1f2a0f" Feb 24 02:37:51.351279 master-0 kubenswrapper[31411]: I0224 02:37:51.351264 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xrhk2" Feb 24 02:37:51.354248 master-0 kubenswrapper[31411]: I0224 02:37:51.354210 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pf58r" event={"ID":"b8c676cf-386b-455c-b9f8-b88a3a34a136","Type":"ContainerDied","Data":"d74c62343886793796f91cdf7e5e8f9388fc9e71864eeacff4fa6e265e03b348"} Feb 24 02:37:51.354248 master-0 kubenswrapper[31411]: I0224 02:37:51.354245 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d74c62343886793796f91cdf7e5e8f9388fc9e71864eeacff4fa6e265e03b348" Feb 24 02:37:51.354343 master-0 kubenswrapper[31411]: I0224 02:37:51.354224 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pf58r" Feb 24 02:37:51.356834 master-0 kubenswrapper[31411]: I0224 02:37:51.356785 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d8zwm" event={"ID":"0e4defeb-f6b0-46db-9acc-6df2d2490988","Type":"ContainerDied","Data":"c8e4c6703ae7f976629c9672fab48af5a8505ebe0ee72c1d7a8cad7e83f838ea"} Feb 24 02:37:51.356899 master-0 kubenswrapper[31411]: I0224 02:37:51.356836 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8e4c6703ae7f976629c9672fab48af5a8505ebe0ee72c1d7a8cad7e83f838ea" Feb 24 02:37:51.356941 master-0 kubenswrapper[31411]: I0224 02:37:51.356926 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d8zwm" Feb 24 02:37:51.360808 master-0 kubenswrapper[31411]: I0224 02:37:51.360735 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-config" (OuterVolumeSpecName: "config") pod "3695612d-87a3-4401-9303-05ae933a9f78" (UID: "3695612d-87a3-4401-9303-05ae933a9f78"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:51.364450 master-0 kubenswrapper[31411]: I0224 02:37:51.364412 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-8hw9n" event={"ID":"3695612d-87a3-4401-9303-05ae933a9f78","Type":"ContainerDied","Data":"0584c53169051c3b8f3c6fe97050a745191402c66fef44ca5b9f56f8edd8f23c"} Feb 24 02:37:51.364450 master-0 kubenswrapper[31411]: I0224 02:37:51.364448 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0584c53169051c3b8f3c6fe97050a745191402c66fef44ca5b9f56f8edd8f23c" Feb 24 02:37:51.364549 master-0 kubenswrapper[31411]: I0224 02:37:51.364510 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-8hw9n" Feb 24 02:37:51.367758 master-0 kubenswrapper[31411]: I0224 02:37:51.367724 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3695612d-87a3-4401-9303-05ae933a9f78" (UID: "3695612d-87a3-4401-9303-05ae933a9f78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:37:51.369904 master-0 kubenswrapper[31411]: I0224 02:37:51.369861 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"9860c0f4-4e67-4271-8e5f-e69506201f8b","Type":"ContainerStarted","Data":"2263dcfc50c55644569e380ab452909811ca39936d0a525456e01cd1f591464a"} Feb 24 02:37:51.371963 master-0 kubenswrapper[31411]: I0224 02:37:51.371928 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4" event={"ID":"85b32976-b3fc-498f-b15b-690cee8bfc95","Type":"ContainerDied","Data":"bfd808febe62f84818865ee571b761e239a8cf107711594e5d31854c8c4f7ab4"} Feb 24 02:37:51.371963 master-0 kubenswrapper[31411]: I0224 02:37:51.371957 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfd808febe62f84818865ee571b761e239a8cf107711594e5d31854c8c4f7ab4" Feb 24 02:37:51.372054 master-0 kubenswrapper[31411]: I0224 02:37:51.372004 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d8b9-account-create-update-kq9f4" Feb 24 02:37:51.378800 master-0 kubenswrapper[31411]: I0224 02:37:51.377124 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-05a5-account-create-update-bt8vb" event={"ID":"8c1ff294-924a-46b5-9107-84238d30135f","Type":"ContainerDied","Data":"7ea88c51b82789c2612c6b6897f22dd1fb751ee3f061f5d3ac924a7c53d478ee"} Feb 24 02:37:51.378800 master-0 kubenswrapper[31411]: I0224 02:37:51.377171 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ea88c51b82789c2612c6b6897f22dd1fb751ee3f061f5d3ac924a7c53d478ee" Feb 24 02:37:51.378800 master-0 kubenswrapper[31411]: I0224 02:37:51.377252 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-05a5-account-create-update-bt8vb" Feb 24 02:37:51.404804 master-0 kubenswrapper[31411]: I0224 02:37:51.404763 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bvdr\" (UniqueName: \"kubernetes.io/projected/3695612d-87a3-4401-9303-05ae933a9f78-kube-api-access-2bvdr\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.404804 master-0 kubenswrapper[31411]: I0224 02:37:51.404798 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.404897 master-0 kubenswrapper[31411]: I0224 02:37:51.404810 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tvw5\" (UniqueName: \"kubernetes.io/projected/264be425-2328-436a-9a2d-0215e640276c-kube-api-access-8tvw5\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.404897 master-0 kubenswrapper[31411]: I0224 02:37:51.404820 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfckq\" (UniqueName: \"kubernetes.io/projected/0e4defeb-f6b0-46db-9acc-6df2d2490988-kube-api-access-rfckq\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.404897 master-0 kubenswrapper[31411]: I0224 02:37:51.404856 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6j65\" (UniqueName: \"kubernetes.io/projected/85b32976-b3fc-498f-b15b-690cee8bfc95-kube-api-access-c6j65\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.404897 master-0 kubenswrapper[31411]: I0224 02:37:51.404870 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.404897 master-0 kubenswrapper[31411]: I0224 02:37:51.404880 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3695612d-87a3-4401-9303-05ae933a9f78-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:51.404897 master-0 kubenswrapper[31411]: I0224 02:37:51.404889 31411 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3695612d-87a3-4401-9303-05ae933a9f78-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 24 02:37:52.392993 master-0 kubenswrapper[31411]: I0224 02:37:52.392920 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"9860c0f4-4e67-4271-8e5f-e69506201f8b","Type":"ContainerStarted","Data":"7893a29fb0d98f81260a0f7454e76c40cb9e726adc0b34556a569927f2c205fd"} Feb 24 02:37:52.392993 master-0 kubenswrapper[31411]: I0224 02:37:52.392989 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8705a-default-external-api-0" event={"ID":"9860c0f4-4e67-4271-8e5f-e69506201f8b","Type":"ContainerStarted","Data":"c9c862b3faa495012624edd5f27080c798aeb03f2be69685091af6af368fb3b3"} Feb 24 02:37:52.400753 master-0 kubenswrapper[31411]: I0224 02:37:52.400610 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerStarted","Data":"36d13af0ded8a709ec8573150c0887be64ada18f8cbdaf589c8d3ca0547565c0"} Feb 24 02:37:52.454323 master-0 kubenswrapper[31411]: I0224 02:37:52.454145 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8705a-default-external-api-0" podStartSLOduration=8.454115072 podStartE2EDuration="8.454115072s" podCreationTimestamp="2026-02-24 02:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:52.431642112 +0000 UTC m=+1015.648839988" watchObservedRunningTime="2026-02-24 02:37:52.454115072 +0000 UTC m=+1015.671312948" Feb 24 
02:37:53.185712 master-0 kubenswrapper[31411]: I0224 02:37:53.185602 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cc6c67c77-h5cpc"]
Feb 24 02:37:53.188845 master-0 kubenswrapper[31411]: E0224 02:37:53.188798 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c676cf-386b-455c-b9f8-b88a3a34a136" containerName="mariadb-database-create"
Feb 24 02:37:53.188845 master-0 kubenswrapper[31411]: I0224 02:37:53.188828 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c676cf-386b-455c-b9f8-b88a3a34a136" containerName="mariadb-database-create"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: E0224 02:37:53.188853 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c28d7c-77ba-477e-9b90-8432c2c7b48f" containerName="mariadb-database-create"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.188862 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c28d7c-77ba-477e-9b90-8432c2c7b48f" containerName="mariadb-database-create"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: E0224 02:37:53.188879 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3695612d-87a3-4401-9303-05ae933a9f78" containerName="ironic-inspector-db-sync"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.188886 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="3695612d-87a3-4401-9303-05ae933a9f78" containerName="ironic-inspector-db-sync"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: E0224 02:37:53.188899 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1ff294-924a-46b5-9107-84238d30135f" containerName="mariadb-account-create-update"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.188905 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1ff294-924a-46b5-9107-84238d30135f" containerName="mariadb-account-create-update"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: E0224 02:37:53.188925 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b32976-b3fc-498f-b15b-690cee8bfc95" containerName="mariadb-account-create-update"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.188933 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b32976-b3fc-498f-b15b-690cee8bfc95" containerName="mariadb-account-create-update"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: E0224 02:37:53.188948 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4defeb-f6b0-46db-9acc-6df2d2490988" containerName="mariadb-database-create"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.188957 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4defeb-f6b0-46db-9acc-6df2d2490988" containerName="mariadb-database-create"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: E0224 02:37:53.188979 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264be425-2328-436a-9a2d-0215e640276c" containerName="mariadb-account-create-update"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.188986 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="264be425-2328-436a-9a2d-0215e640276c" containerName="mariadb-account-create-update"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.189280 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="3695612d-87a3-4401-9303-05ae933a9f78" containerName="ironic-inspector-db-sync"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.189308 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b32976-b3fc-498f-b15b-690cee8bfc95" containerName="mariadb-account-create-update"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.189340 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="264be425-2328-436a-9a2d-0215e640276c" containerName="mariadb-account-create-update"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.189348 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1ff294-924a-46b5-9107-84238d30135f" containerName="mariadb-account-create-update"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.189365 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="80c28d7c-77ba-477e-9b90-8432c2c7b48f" containerName="mariadb-database-create"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.189584 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4defeb-f6b0-46db-9acc-6df2d2490988" containerName="mariadb-database-create"
Feb 24 02:37:53.190390 master-0 kubenswrapper[31411]: I0224 02:37:53.189599 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c676cf-386b-455c-b9f8-b88a3a34a136" containerName="mariadb-database-create"
Feb 24 02:37:53.194223 master-0 kubenswrapper[31411]: I0224 02:37:53.191132 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.217917 master-0 kubenswrapper[31411]: I0224 02:37:53.217859 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cc6c67c77-h5cpc"]
Feb 24 02:37:53.324191 master-0 kubenswrapper[31411]: I0224 02:37:53.323284 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Feb 24 02:37:53.343266 master-0 kubenswrapper[31411]: I0224 02:37:53.341764 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.344050 master-0 kubenswrapper[31411]: I0224 02:37:53.343982 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Feb 24 02:37:53.344545 master-0 kubenswrapper[31411]: I0224 02:37:53.344524 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Feb 24 02:37:53.346402 master-0 kubenswrapper[31411]: I0224 02:37:53.346111 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Feb 24 02:37:53.350012 master-0 kubenswrapper[31411]: I0224 02:37:53.349965 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 24 02:37:53.398620 master-0 kubenswrapper[31411]: I0224 02:37:53.398540 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-config\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.399259 master-0 kubenswrapper[31411]: I0224 02:37:53.398640 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.399259 master-0 kubenswrapper[31411]: I0224 02:37:53.398683 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-sb\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.399259 master-0 kubenswrapper[31411]: I0224 02:37:53.398786 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx4db\" (UniqueName: \"kubernetes.io/projected/5073b6d9-022e-431f-92a1-9e4dbb1a2707-kube-api-access-qx4db\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.399259 master-0 kubenswrapper[31411]: I0224 02:37:53.398849 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-swift-storage-0\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.399259 master-0 kubenswrapper[31411]: I0224 02:37:53.398870 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-svc\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.501217 master-0 kubenswrapper[31411]: I0224 02:37:53.501152 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.501486 master-0 kubenswrapper[31411]: I0224 02:37:53.501253 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-config\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.501486 master-0 kubenswrapper[31411]: I0224 02:37:53.501286 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.501486 master-0 kubenswrapper[31411]: I0224 02:37:53.501317 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-sb\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.501486 master-0 kubenswrapper[31411]: I0224 02:37:53.501406 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx4db\" (UniqueName: \"kubernetes.io/projected/5073b6d9-022e-431f-92a1-9e4dbb1a2707-kube-api-access-qx4db\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.501486 master-0 kubenswrapper[31411]: I0224 02:37:53.501436 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-scripts\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.501486 master-0 kubenswrapper[31411]: I0224 02:37:53.501467 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.501691 master-0 kubenswrapper[31411]: I0224 02:37:53.501496 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f74869c5-cc4c-4327-8bb2-c70f78537698-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.501691 master-0 kubenswrapper[31411]: I0224 02:37:53.501517 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqjr\" (UniqueName: \"kubernetes.io/projected/f74869c5-cc4c-4327-8bb2-c70f78537698-kube-api-access-rqqjr\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.501691 master-0 kubenswrapper[31411]: I0224 02:37:53.501541 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.501691 master-0 kubenswrapper[31411]: I0224 02:37:53.501562 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-swift-storage-0\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.501691 master-0 kubenswrapper[31411]: I0224 02:37:53.501598 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-svc\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.501691 master-0 kubenswrapper[31411]: I0224 02:37:53.501641 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-config\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.502421 master-0 kubenswrapper[31411]: I0224 02:37:53.502366 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-config\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.502526 master-0 kubenswrapper[31411]: I0224 02:37:53.502496 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-nb\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.503133 master-0 kubenswrapper[31411]: I0224 02:37:53.503097 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-sb\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.503312 master-0 kubenswrapper[31411]: I0224 02:37:53.503280 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-swift-storage-0\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.504535 master-0 kubenswrapper[31411]: I0224 02:37:53.504407 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-svc\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.524372 master-0 kubenswrapper[31411]: I0224 02:37:53.524069 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx4db\" (UniqueName: \"kubernetes.io/projected/5073b6d9-022e-431f-92a1-9e4dbb1a2707-kube-api-access-qx4db\") pod \"dnsmasq-dns-7cc6c67c77-h5cpc\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.528884 master-0 kubenswrapper[31411]: I0224 02:37:53.528814 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:53.582316 master-0 kubenswrapper[31411]: I0224 02:37:53.581262 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:53.582316 master-0 kubenswrapper[31411]: I0224 02:37:53.581363 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:53.604447 master-0 kubenswrapper[31411]: I0224 02:37:53.604302 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.610092 master-0 kubenswrapper[31411]: I0224 02:37:53.610006 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-scripts\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.610092 master-0 kubenswrapper[31411]: I0224 02:37:53.610065 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.610200 master-0 kubenswrapper[31411]: I0224 02:37:53.610102 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f74869c5-cc4c-4327-8bb2-c70f78537698-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.610200 master-0 kubenswrapper[31411]: I0224 02:37:53.610124 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqqjr\" (UniqueName: \"kubernetes.io/projected/f74869c5-cc4c-4327-8bb2-c70f78537698-kube-api-access-rqqjr\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.610200 master-0 kubenswrapper[31411]: I0224 02:37:53.610149 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.610304 master-0 kubenswrapper[31411]: I0224 02:37:53.610233 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-config\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.610805 master-0 kubenswrapper[31411]: I0224 02:37:53.610673 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.611525 master-0 kubenswrapper[31411]: I0224 02:37:53.611427 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.617156 master-0 kubenswrapper[31411]: I0224 02:37:53.617102 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f74869c5-cc4c-4327-8bb2-c70f78537698-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.617791 master-0 kubenswrapper[31411]: I0224 02:37:53.617733 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-scripts\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.618664 master-0 kubenswrapper[31411]: I0224 02:37:53.618419 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-config\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.618855 master-0 kubenswrapper[31411]: I0224 02:37:53.618805 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.639475 master-0 kubenswrapper[31411]: I0224 02:37:53.639423 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqqjr\" (UniqueName: \"kubernetes.io/projected/f74869c5-cc4c-4327-8bb2-c70f78537698-kube-api-access-rqqjr\") pod \"ironic-inspector-0\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " pod="openstack/ironic-inspector-0"
Feb 24 02:37:53.649287 master-0 kubenswrapper[31411]: I0224 02:37:53.649250 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:53.649407 master-0 kubenswrapper[31411]: I0224 02:37:53.649337 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:53.669452 master-0 kubenswrapper[31411]: I0224 02:37:53.668729 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 24 02:37:54.061668 master-0 kubenswrapper[31411]: W0224 02:37:54.061063 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5073b6d9_022e_431f_92a1_9e4dbb1a2707.slice/crio-edb1b8f3743f7e4a8fab33ec572cb1003e53b0ee15dbbaa6b98c1bdbc275d304 WatchSource:0}: Error finding container edb1b8f3743f7e4a8fab33ec572cb1003e53b0ee15dbbaa6b98c1bdbc275d304: Status 404 returned error can't find the container with id edb1b8f3743f7e4a8fab33ec572cb1003e53b0ee15dbbaa6b98c1bdbc275d304
Feb 24 02:37:54.074984 master-0 kubenswrapper[31411]: I0224 02:37:54.074945 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cc6c67c77-h5cpc"]
Feb 24 02:37:54.427034 master-0 kubenswrapper[31411]: I0224 02:37:54.426706 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 24 02:37:54.445143 master-0 kubenswrapper[31411]: I0224 02:37:54.444662 31411 generic.go:334] "Generic (PLEG): container finished" podID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" containerID="62834fe40abab078b78c2b05f8d8521a2d8b172de94e3d02a4335dc840fb3857" exitCode=0
Feb 24 02:37:54.445143 master-0 kubenswrapper[31411]: I0224 02:37:54.444713 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" event={"ID":"5073b6d9-022e-431f-92a1-9e4dbb1a2707","Type":"ContainerDied","Data":"62834fe40abab078b78c2b05f8d8521a2d8b172de94e3d02a4335dc840fb3857"}
Feb 24 02:37:54.445143 master-0 kubenswrapper[31411]: I0224 02:37:54.444791 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" event={"ID":"5073b6d9-022e-431f-92a1-9e4dbb1a2707","Type":"ContainerStarted","Data":"edb1b8f3743f7e4a8fab33ec572cb1003e53b0ee15dbbaa6b98c1bdbc275d304"}
Feb 24 02:37:54.447276 master-0 kubenswrapper[31411]: I0224 02:37:54.445694 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:54.447276 master-0 kubenswrapper[31411]: I0224 02:37:54.445721 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:55.459094 master-0 kubenswrapper[31411]: I0224 02:37:55.459035 31411 generic.go:334] "Generic (PLEG): container finished" podID="f74869c5-cc4c-4327-8bb2-c70f78537698" containerID="58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734" exitCode=0
Feb 24 02:37:55.459747 master-0 kubenswrapper[31411]: I0224 02:37:55.459159 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f74869c5-cc4c-4327-8bb2-c70f78537698","Type":"ContainerDied","Data":"58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734"}
Feb 24 02:37:55.459747 master-0 kubenswrapper[31411]: I0224 02:37:55.459237 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f74869c5-cc4c-4327-8bb2-c70f78537698","Type":"ContainerStarted","Data":"68bf202bc24db62f73938570dad3eb6dedb741c7ffe2c7eb29b148723f28dfc2"}
Feb 24 02:37:55.463172 master-0 kubenswrapper[31411]: I0224 02:37:55.463032 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" event={"ID":"5073b6d9-022e-431f-92a1-9e4dbb1a2707","Type":"ContainerStarted","Data":"26258ca057eef9536895385b36d82a0b998133dc91eeb54a32a901e1263f87fd"}
Feb 24 02:37:55.463172 master-0 kubenswrapper[31411]: I0224 02:37:55.463149 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:37:55.466927 master-0 kubenswrapper[31411]: I0224 02:37:55.466873 31411 generic.go:334] "Generic (PLEG): container finished" podID="d289f4ce-9a2f-4d57-bf7b-414618c7c4e8" containerID="36d13af0ded8a709ec8573150c0887be64ada18f8cbdaf589c8d3ca0547565c0" exitCode=0
Feb 24 02:37:55.467013 master-0 kubenswrapper[31411]: I0224 02:37:55.466949 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerDied","Data":"36d13af0ded8a709ec8573150c0887be64ada18f8cbdaf589c8d3ca0547565c0"}
Feb 24 02:37:55.531845 master-0 kubenswrapper[31411]: I0224 02:37:55.531736 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" podStartSLOduration=2.531706325 podStartE2EDuration="2.531706325s" podCreationTimestamp="2026-02-24 02:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:37:55.52583174 +0000 UTC m=+1018.743029626" watchObservedRunningTime="2026-02-24 02:37:55.531706325 +0000 UTC m=+1018.748904181"
Feb 24 02:37:56.001324 master-0 kubenswrapper[31411]: I0224 02:37:56.001221 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:56.001324 master-0 kubenswrapper[31411]: I0224 02:37:56.001294 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:56.053544 master-0 kubenswrapper[31411]: I0224 02:37:56.053471 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:56.117406 master-0 kubenswrapper[31411]: I0224 02:37:56.117353 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:56.478330 master-0 kubenswrapper[31411]: I0224 02:37:56.478232 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:56.478330 master-0 kubenswrapper[31411]: I0224 02:37:56.478342 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:56.506679 master-0 kubenswrapper[31411]: I0224 02:37:56.506427 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:56.506679 master-0 kubenswrapper[31411]: I0224 02:37:56.506642 31411 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 02:37:56.670608 master-0 kubenswrapper[31411]: I0224 02:37:56.669073 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-8705a-default-internal-api-0"
Feb 24 02:37:57.321540 master-0 kubenswrapper[31411]: I0224 02:37:57.321305 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vdhjz"]
Feb 24 02:37:57.326665 master-0 kubenswrapper[31411]: I0224 02:37:57.326632 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.329532 master-0 kubenswrapper[31411]: I0224 02:37:57.329374 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 24 02:37:57.329734 master-0 kubenswrapper[31411]: I0224 02:37:57.329615 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 24 02:37:57.481380 master-0 kubenswrapper[31411]: I0224 02:37:57.481301 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-config-data\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.481999 master-0 kubenswrapper[31411]: I0224 02:37:57.481747 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pb6d\" (UniqueName: \"kubernetes.io/projected/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-kube-api-access-9pb6d\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.481999 master-0 kubenswrapper[31411]: I0224 02:37:57.481854 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-scripts\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.481999 master-0 kubenswrapper[31411]: I0224 02:37:57.481934 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.589772 master-0 kubenswrapper[31411]: I0224 02:37:57.585820 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-scripts\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.589772 master-0 kubenswrapper[31411]: I0224 02:37:57.585891 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.589772 master-0 kubenswrapper[31411]: I0224 02:37:57.586038 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-config-data\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.589772 master-0 kubenswrapper[31411]: I0224 02:37:57.586131 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pb6d\" (UniqueName: \"kubernetes.io/projected/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-kube-api-access-9pb6d\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.592962 master-0 kubenswrapper[31411]: I0224 02:37:57.592914 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-scripts\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.617526 master-0 kubenswrapper[31411]: I0224 02:37:57.613267 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-config-data\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.618648 master-0 kubenswrapper[31411]: I0224 02:37:57.618551 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vdhjz"]
Feb 24 02:37:57.619072 master-0 kubenswrapper[31411]: I0224 02:37:57.619014 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.678336 master-0 kubenswrapper[31411]: I0224 02:37:57.678284 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pb6d\" (UniqueName: \"kubernetes.io/projected/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-kube-api-access-9pb6d\") pod \"nova-cell0-conductor-db-sync-vdhjz\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:57.764667 master-0 kubenswrapper[31411]: I0224 02:37:57.757931 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 24 02:37:57.976420 master-0 kubenswrapper[31411]: I0224 02:37:57.976089 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vdhjz"
Feb 24 02:37:58.727074 master-0 kubenswrapper[31411]: I0224 02:37:58.727026 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vdhjz"]
Feb 24 02:37:58.755676 master-0 kubenswrapper[31411]: W0224 02:37:58.752643 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9d4fad3_73a1_40b5_8af2_abb7bfee4bb9.slice/crio-a6f5f7245e9684c42971ef90fc2f9320589230fdecc8d13952f68c1f41cb6f3d WatchSource:0}: Error finding container a6f5f7245e9684c42971ef90fc2f9320589230fdecc8d13952f68c1f41cb6f3d: Status 404 returned error can't find the container with id a6f5f7245e9684c42971ef90fc2f9320589230fdecc8d13952f68c1f41cb6f3d
Feb 24 02:37:58.823170 master-0 kubenswrapper[31411]: I0224 02:37:58.822787 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:37:59.628928 master-0 kubenswrapper[31411]: I0224 02:37:59.628726 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vdhjz" event={"ID":"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9","Type":"ContainerStarted","Data":"a6f5f7245e9684c42971ef90fc2f9320589230fdecc8d13952f68c1f41cb6f3d"}
Feb 24 02:38:00.027305 master-0 kubenswrapper[31411]: I0224 02:38:00.027221 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-8705a-default-external-api-0"
Feb 24 02:38:01.667365 master-0 kubenswrapper[31411]: I0224 02:38:01.667310 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Feb 24 02:38:02.679857 master-0 kubenswrapper[31411]: I0224 02:38:02.679763 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f74869c5-cc4c-4327-8bb2-c70f78537698","Type":"ContainerStarted","Data":"ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b"}
Feb 24 02:38:02.680969 master-0 kubenswrapper[31411]: I0224 02:38:02.680853 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="f74869c5-cc4c-4327-8bb2-c70f78537698" containerName="inspector-pxe-init" containerID="cri-o://ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b" gracePeriod=60
Feb 24 02:38:02.685378 master-0 kubenswrapper[31411]: I0224 02:38:02.685312 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerStarted","Data":"ceab83b81645ba25888f1449863cb97287e5ea0eae1279b2c30abfc6fb906b06"}
Feb 24 02:38:03.482712 master-0 kubenswrapper[31411]: I0224 02:38:03.482603 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 24 02:38:03.532124 master-0 kubenswrapper[31411]: I0224 02:38:03.532048 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc"
Feb 24 02:38:03.563789 master-0 kubenswrapper[31411]: I0224 02:38:03.563737 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-combined-ca-bundle\") pod \"f74869c5-cc4c-4327-8bb2-c70f78537698\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") "
Feb 24 02:38:03.564025 master-0 kubenswrapper[31411]: I0224 02:38:03.563820 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-scripts\") pod \"f74869c5-cc4c-4327-8bb2-c70f78537698\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") "
Feb 24 02:38:03.564025 master-0
kubenswrapper[31411]: I0224 02:38:03.563868 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"f74869c5-cc4c-4327-8bb2-c70f78537698\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " Feb 24 02:38:03.564025 master-0 kubenswrapper[31411]: I0224 02:38:03.563935 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-config\") pod \"f74869c5-cc4c-4327-8bb2-c70f78537698\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " Feb 24 02:38:03.564025 master-0 kubenswrapper[31411]: I0224 02:38:03.563970 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqqjr\" (UniqueName: \"kubernetes.io/projected/f74869c5-cc4c-4327-8bb2-c70f78537698-kube-api-access-rqqjr\") pod \"f74869c5-cc4c-4327-8bb2-c70f78537698\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " Feb 24 02:38:03.564209 master-0 kubenswrapper[31411]: I0224 02:38:03.564092 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic\") pod \"f74869c5-cc4c-4327-8bb2-c70f78537698\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " Feb 24 02:38:03.564372 master-0 kubenswrapper[31411]: I0224 02:38:03.564346 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f74869c5-cc4c-4327-8bb2-c70f78537698-etc-podinfo\") pod \"f74869c5-cc4c-4327-8bb2-c70f78537698\" (UID: \"f74869c5-cc4c-4327-8bb2-c70f78537698\") " Feb 24 02:38:03.565160 master-0 kubenswrapper[31411]: I0224 02:38:03.565088 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "f74869c5-cc4c-4327-8bb2-c70f78537698" (UID: "f74869c5-cc4c-4327-8bb2-c70f78537698"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:38:03.565656 master-0 kubenswrapper[31411]: I0224 02:38:03.565616 31411 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:03.573711 master-0 kubenswrapper[31411]: I0224 02:38:03.573658 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "f74869c5-cc4c-4327-8bb2-c70f78537698" (UID: "f74869c5-cc4c-4327-8bb2-c70f78537698"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:38:03.573985 master-0 kubenswrapper[31411]: I0224 02:38:03.573888 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74869c5-cc4c-4327-8bb2-c70f78537698-kube-api-access-rqqjr" (OuterVolumeSpecName: "kube-api-access-rqqjr") pod "f74869c5-cc4c-4327-8bb2-c70f78537698" (UID: "f74869c5-cc4c-4327-8bb2-c70f78537698"). InnerVolumeSpecName "kube-api-access-rqqjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:38:03.584633 master-0 kubenswrapper[31411]: I0224 02:38:03.583683 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f74869c5-cc4c-4327-8bb2-c70f78537698-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "f74869c5-cc4c-4327-8bb2-c70f78537698" (UID: "f74869c5-cc4c-4327-8bb2-c70f78537698"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 02:38:03.585615 master-0 kubenswrapper[31411]: I0224 02:38:03.585507 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-config" (OuterVolumeSpecName: "config") pod "f74869c5-cc4c-4327-8bb2-c70f78537698" (UID: "f74869c5-cc4c-4327-8bb2-c70f78537698"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:03.585711 master-0 kubenswrapper[31411]: I0224 02:38:03.585561 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-scripts" (OuterVolumeSpecName: "scripts") pod "f74869c5-cc4c-4327-8bb2-c70f78537698" (UID: "f74869c5-cc4c-4327-8bb2-c70f78537698"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:03.652083 master-0 kubenswrapper[31411]: I0224 02:38:03.650109 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b45666449-v77b5"] Feb 24 02:38:03.652083 master-0 kubenswrapper[31411]: I0224 02:38:03.650431 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b45666449-v77b5" podUID="8bdcc60c-f7fb-43df-8b93-035a0796383f" containerName="dnsmasq-dns" containerID="cri-o://47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3" gracePeriod=10 Feb 24 02:38:03.670807 master-0 kubenswrapper[31411]: I0224 02:38:03.670769 31411 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f74869c5-cc4c-4327-8bb2-c70f78537698-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:03.671140 master-0 kubenswrapper[31411]: I0224 02:38:03.671126 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:03.671233 master-0 kubenswrapper[31411]: I0224 02:38:03.671222 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:03.671672 master-0 kubenswrapper[31411]: I0224 02:38:03.671655 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqqjr\" (UniqueName: \"kubernetes.io/projected/f74869c5-cc4c-4327-8bb2-c70f78537698-kube-api-access-rqqjr\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:03.671815 master-0 kubenswrapper[31411]: I0224 02:38:03.671803 31411 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f74869c5-cc4c-4327-8bb2-c70f78537698-var-lib-ironic\") on node \"master-0\" 
DevicePath \"\"" Feb 24 02:38:03.705656 master-0 kubenswrapper[31411]: I0224 02:38:03.704848 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f74869c5-cc4c-4327-8bb2-c70f78537698" (UID: "f74869c5-cc4c-4327-8bb2-c70f78537698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:03.736995 master-0 kubenswrapper[31411]: I0224 02:38:03.736919 31411 generic.go:334] "Generic (PLEG): container finished" podID="f74869c5-cc4c-4327-8bb2-c70f78537698" containerID="ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b" exitCode=0 Feb 24 02:38:03.737469 master-0 kubenswrapper[31411]: I0224 02:38:03.737022 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f74869c5-cc4c-4327-8bb2-c70f78537698","Type":"ContainerDied","Data":"ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b"} Feb 24 02:38:03.737469 master-0 kubenswrapper[31411]: I0224 02:38:03.737064 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f74869c5-cc4c-4327-8bb2-c70f78537698","Type":"ContainerDied","Data":"68bf202bc24db62f73938570dad3eb6dedb741c7ffe2c7eb29b148723f28dfc2"} Feb 24 02:38:03.737469 master-0 kubenswrapper[31411]: I0224 02:38:03.737087 31411 scope.go:117] "RemoveContainer" containerID="ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b" Feb 24 02:38:03.737469 master-0 kubenswrapper[31411]: I0224 02:38:03.737387 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 24 02:38:03.775386 master-0 kubenswrapper[31411]: I0224 02:38:03.775334 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f74869c5-cc4c-4327-8bb2-c70f78537698-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:03.780180 master-0 kubenswrapper[31411]: I0224 02:38:03.780140 31411 scope.go:117] "RemoveContainer" containerID="58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734" Feb 24 02:38:03.876285 master-0 kubenswrapper[31411]: I0224 02:38:03.874917 31411 scope.go:117] "RemoveContainer" containerID="ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b" Feb 24 02:38:03.876285 master-0 kubenswrapper[31411]: E0224 02:38:03.876269 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b\": container with ID starting with ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b not found: ID does not exist" containerID="ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b" Feb 24 02:38:03.876472 master-0 kubenswrapper[31411]: I0224 02:38:03.876311 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b"} err="failed to get container status \"ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b\": rpc error: code = NotFound desc = could not find container \"ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b\": container with ID starting with ab9ed2c9e17e13459b7291e1ee4657e6a5f5ac13c1c36fb2cbd7770973a24e7b not found: ID does not exist" Feb 24 02:38:03.876472 master-0 kubenswrapper[31411]: I0224 02:38:03.876344 31411 scope.go:117] "RemoveContainer" 
containerID="58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734" Feb 24 02:38:03.876804 master-0 kubenswrapper[31411]: E0224 02:38:03.876775 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734\": container with ID starting with 58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734 not found: ID does not exist" containerID="58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734" Feb 24 02:38:03.876884 master-0 kubenswrapper[31411]: I0224 02:38:03.876807 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734"} err="failed to get container status \"58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734\": rpc error: code = NotFound desc = could not find container \"58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734\": container with ID starting with 58ce87e7811af742014c6b7089b1deeda6b85768e7dbf2fb64d234d00a06e734 not found: ID does not exist" Feb 24 02:38:03.905879 master-0 kubenswrapper[31411]: I0224 02:38:03.899879 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 02:38:03.929437 master-0 kubenswrapper[31411]: I0224 02:38:03.920276 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 02:38:03.948623 master-0 kubenswrapper[31411]: I0224 02:38:03.938662 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 02:38:03.948623 master-0 kubenswrapper[31411]: E0224 02:38:03.944101 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f74869c5-cc4c-4327-8bb2-c70f78537698" containerName="ironic-python-agent-init" Feb 24 02:38:03.948623 master-0 kubenswrapper[31411]: I0224 02:38:03.944148 31411 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f74869c5-cc4c-4327-8bb2-c70f78537698" containerName="ironic-python-agent-init" Feb 24 02:38:03.948623 master-0 kubenswrapper[31411]: E0224 02:38:03.944199 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f74869c5-cc4c-4327-8bb2-c70f78537698" containerName="inspector-pxe-init" Feb 24 02:38:03.948623 master-0 kubenswrapper[31411]: I0224 02:38:03.944206 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="f74869c5-cc4c-4327-8bb2-c70f78537698" containerName="inspector-pxe-init" Feb 24 02:38:03.949241 master-0 kubenswrapper[31411]: I0224 02:38:03.949180 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="f74869c5-cc4c-4327-8bb2-c70f78537698" containerName="inspector-pxe-init" Feb 24 02:38:03.954763 master-0 kubenswrapper[31411]: I0224 02:38:03.954719 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 24 02:38:03.958891 master-0 kubenswrapper[31411]: I0224 02:38:03.958724 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 24 02:38:03.958955 master-0 kubenswrapper[31411]: I0224 02:38:03.958923 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Feb 24 02:38:03.961390 master-0 kubenswrapper[31411]: I0224 02:38:03.960658 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Feb 24 02:38:03.961390 master-0 kubenswrapper[31411]: I0224 02:38:03.960904 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 24 02:38:03.961390 master-0 kubenswrapper[31411]: I0224 02:38:03.961061 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 24 02:38:04.008410 master-0 kubenswrapper[31411]: I0224 02:38:04.008360 31411 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 02:38:04.099215 master-0 kubenswrapper[31411]: I0224 02:38:04.097739 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.099215 master-0 kubenswrapper[31411]: I0224 02:38:04.097891 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.099215 master-0 kubenswrapper[31411]: I0224 02:38:04.097998 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg6c7\" (UniqueName: \"kubernetes.io/projected/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-kube-api-access-fg6c7\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.099215 master-0 kubenswrapper[31411]: I0224 02:38:04.098036 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.099215 master-0 kubenswrapper[31411]: I0224 02:38:04.098081 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-config\") pod 
\"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.099215 master-0 kubenswrapper[31411]: I0224 02:38:04.098116 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.099215 master-0 kubenswrapper[31411]: I0224 02:38:04.098166 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.099215 master-0 kubenswrapper[31411]: I0224 02:38:04.098203 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.099215 master-0 kubenswrapper[31411]: I0224 02:38:04.098232 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-scripts\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.201213 master-0 kubenswrapper[31411]: I0224 02:38:04.201142 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg6c7\" (UniqueName: 
\"kubernetes.io/projected/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-kube-api-access-fg6c7\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.201457 master-0 kubenswrapper[31411]: I0224 02:38:04.201234 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.201543 master-0 kubenswrapper[31411]: I0224 02:38:04.201478 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-config\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.201726 master-0 kubenswrapper[31411]: I0224 02:38:04.201689 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.201957 master-0 kubenswrapper[31411]: I0224 02:38:04.201922 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.202047 master-0 kubenswrapper[31411]: I0224 02:38:04.202028 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.202184 master-0 kubenswrapper[31411]: I0224 02:38:04.202136 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-scripts\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.202322 master-0 kubenswrapper[31411]: I0224 02:38:04.202297 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.202691 master-0 kubenswrapper[31411]: I0224 02:38:04.202663 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.205655 master-0 kubenswrapper[31411]: I0224 02:38:04.205396 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.205851 master-0 kubenswrapper[31411]: I0224 02:38:04.205787 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: 
\"kubernetes.io/empty-dir/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.207957 master-0 kubenswrapper[31411]: I0224 02:38:04.207764 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.208985 master-0 kubenswrapper[31411]: I0224 02:38:04.208934 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.210131 master-0 kubenswrapper[31411]: I0224 02:38:04.210093 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-config\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.216926 master-0 kubenswrapper[31411]: I0224 02:38:04.215693 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-scripts\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.216926 master-0 kubenswrapper[31411]: I0224 02:38:04.216252 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: 
\"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.220803 master-0 kubenswrapper[31411]: I0224 02:38:04.220451 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg6c7\" (UniqueName: \"kubernetes.io/projected/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-kube-api-access-fg6c7\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.226525 master-0 kubenswrapper[31411]: I0224 02:38:04.226342 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ae074-7d06-4293-91d0-f5dbd04ff2f4-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f57ae074-7d06-4293-91d0-f5dbd04ff2f4\") " pod="openstack/ironic-inspector-0" Feb 24 02:38:04.291752 master-0 kubenswrapper[31411]: I0224 02:38:04.291549 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 24 02:38:04.436760 master-0 kubenswrapper[31411]: I0224 02:38:04.435131 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:38:04.618637 master-0 kubenswrapper[31411]: I0224 02:38:04.618565 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-config\") pod \"8bdcc60c-f7fb-43df-8b93-035a0796383f\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " Feb 24 02:38:04.618930 master-0 kubenswrapper[31411]: I0224 02:38:04.618686 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-swift-storage-0\") pod \"8bdcc60c-f7fb-43df-8b93-035a0796383f\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " Feb 24 02:38:04.618930 master-0 kubenswrapper[31411]: I0224 02:38:04.618763 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-sb\") pod \"8bdcc60c-f7fb-43df-8b93-035a0796383f\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " Feb 24 02:38:04.618930 master-0 kubenswrapper[31411]: I0224 02:38:04.618840 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-svc\") pod \"8bdcc60c-f7fb-43df-8b93-035a0796383f\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " Feb 24 02:38:04.618930 master-0 kubenswrapper[31411]: I0224 02:38:04.618867 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-nb\") pod \"8bdcc60c-f7fb-43df-8b93-035a0796383f\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " Feb 24 02:38:04.619165 master-0 kubenswrapper[31411]: I0224 02:38:04.619130 31411 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-87vct\" (UniqueName: \"kubernetes.io/projected/8bdcc60c-f7fb-43df-8b93-035a0796383f-kube-api-access-87vct\") pod \"8bdcc60c-f7fb-43df-8b93-035a0796383f\" (UID: \"8bdcc60c-f7fb-43df-8b93-035a0796383f\") " Feb 24 02:38:04.624624 master-0 kubenswrapper[31411]: I0224 02:38:04.624585 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bdcc60c-f7fb-43df-8b93-035a0796383f-kube-api-access-87vct" (OuterVolumeSpecName: "kube-api-access-87vct") pod "8bdcc60c-f7fb-43df-8b93-035a0796383f" (UID: "8bdcc60c-f7fb-43df-8b93-035a0796383f"). InnerVolumeSpecName "kube-api-access-87vct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:38:04.704907 master-0 kubenswrapper[31411]: I0224 02:38:04.704834 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-config" (OuterVolumeSpecName: "config") pod "8bdcc60c-f7fb-43df-8b93-035a0796383f" (UID: "8bdcc60c-f7fb-43df-8b93-035a0796383f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:04.710060 master-0 kubenswrapper[31411]: I0224 02:38:04.710020 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8bdcc60c-f7fb-43df-8b93-035a0796383f" (UID: "8bdcc60c-f7fb-43df-8b93-035a0796383f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:04.712099 master-0 kubenswrapper[31411]: I0224 02:38:04.712031 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8bdcc60c-f7fb-43df-8b93-035a0796383f" (UID: "8bdcc60c-f7fb-43df-8b93-035a0796383f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:04.712595 master-0 kubenswrapper[31411]: I0224 02:38:04.712526 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8bdcc60c-f7fb-43df-8b93-035a0796383f" (UID: "8bdcc60c-f7fb-43df-8b93-035a0796383f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:04.723659 master-0 kubenswrapper[31411]: I0224 02:38:04.723566 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87vct\" (UniqueName: \"kubernetes.io/projected/8bdcc60c-f7fb-43df-8b93-035a0796383f-kube-api-access-87vct\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:04.723659 master-0 kubenswrapper[31411]: I0224 02:38:04.723653 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:04.723768 master-0 kubenswrapper[31411]: I0224 02:38:04.723666 31411 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:04.723768 master-0 kubenswrapper[31411]: I0224 02:38:04.723677 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:04.723768 master-0 kubenswrapper[31411]: I0224 02:38:04.723691 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:04.726204 master-0 kubenswrapper[31411]: I0224 02:38:04.726136 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8bdcc60c-f7fb-43df-8b93-035a0796383f" (UID: "8bdcc60c-f7fb-43df-8b93-035a0796383f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:04.757933 master-0 kubenswrapper[31411]: I0224 02:38:04.757822 31411 generic.go:334] "Generic (PLEG): container finished" podID="8bdcc60c-f7fb-43df-8b93-035a0796383f" containerID="47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3" exitCode=0 Feb 24 02:38:04.757933 master-0 kubenswrapper[31411]: I0224 02:38:04.757913 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b45666449-v77b5" event={"ID":"8bdcc60c-f7fb-43df-8b93-035a0796383f","Type":"ContainerDied","Data":"47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3"} Feb 24 02:38:04.758029 master-0 kubenswrapper[31411]: I0224 02:38:04.757961 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b45666449-v77b5" event={"ID":"8bdcc60c-f7fb-43df-8b93-035a0796383f","Type":"ContainerDied","Data":"da6f59ba4f50fa2fb89d2d8f92f8093cc3c586c66462fdcaecd150d8a45a50ab"} Feb 24 02:38:04.758029 master-0 kubenswrapper[31411]: I0224 02:38:04.757987 31411 scope.go:117] "RemoveContainer" containerID="47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3" Feb 24 02:38:04.758219 
master-0 kubenswrapper[31411]: I0224 02:38:04.758190 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b45666449-v77b5" Feb 24 02:38:04.785409 master-0 kubenswrapper[31411]: I0224 02:38:04.785357 31411 scope.go:117] "RemoveContainer" containerID="90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7" Feb 24 02:38:04.806503 master-0 kubenswrapper[31411]: I0224 02:38:04.806238 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b45666449-v77b5"] Feb 24 02:38:04.827986 master-0 kubenswrapper[31411]: I0224 02:38:04.827636 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bdcc60c-f7fb-43df-8b93-035a0796383f-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:04.835778 master-0 kubenswrapper[31411]: I0224 02:38:04.833479 31411 scope.go:117] "RemoveContainer" containerID="47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3" Feb 24 02:38:04.839654 master-0 kubenswrapper[31411]: E0224 02:38:04.836909 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3\": container with ID starting with 47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3 not found: ID does not exist" containerID="47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3" Feb 24 02:38:04.839654 master-0 kubenswrapper[31411]: I0224 02:38:04.836979 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3"} err="failed to get container status \"47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3\": rpc error: code = NotFound desc = could not find container \"47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3\": 
container with ID starting with 47b7cb45e1461ca1dd03c22026f38d6712770b0ea1a900da45208953543a87e3 not found: ID does not exist" Feb 24 02:38:04.839654 master-0 kubenswrapper[31411]: I0224 02:38:04.837017 31411 scope.go:117] "RemoveContainer" containerID="90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7" Feb 24 02:38:04.839654 master-0 kubenswrapper[31411]: E0224 02:38:04.837548 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7\": container with ID starting with 90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7 not found: ID does not exist" containerID="90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7" Feb 24 02:38:04.839654 master-0 kubenswrapper[31411]: I0224 02:38:04.837619 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7"} err="failed to get container status \"90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7\": rpc error: code = NotFound desc = could not find container \"90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7\": container with ID starting with 90a1b8dc9793313cec4187727ff7e331f607fe03465d604355fe3354891290e7 not found: ID does not exist" Feb 24 02:38:04.839654 master-0 kubenswrapper[31411]: I0224 02:38:04.837700 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b45666449-v77b5"] Feb 24 02:38:04.938063 master-0 kubenswrapper[31411]: I0224 02:38:04.938002 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 02:38:05.115002 master-0 kubenswrapper[31411]: I0224 02:38:05.114534 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bdcc60c-f7fb-43df-8b93-035a0796383f" 
path="/var/lib/kubelet/pods/8bdcc60c-f7fb-43df-8b93-035a0796383f/volumes" Feb 24 02:38:05.115474 master-0 kubenswrapper[31411]: I0224 02:38:05.115438 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f74869c5-cc4c-4327-8bb2-c70f78537698" path="/var/lib/kubelet/pods/f74869c5-cc4c-4327-8bb2-c70f78537698/volumes" Feb 24 02:38:10.342412 master-0 kubenswrapper[31411]: W0224 02:38:10.342317 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf57ae074_7d06_4293_91d0_f5dbd04ff2f4.slice/crio-f36c8291c46c4a234c176f7ec836c33f83a330292535e163614114670a9a0c1f WatchSource:0}: Error finding container f36c8291c46c4a234c176f7ec836c33f83a330292535e163614114670a9a0c1f: Status 404 returned error can't find the container with id f36c8291c46c4a234c176f7ec836c33f83a330292535e163614114670a9a0c1f Feb 24 02:38:10.866628 master-0 kubenswrapper[31411]: I0224 02:38:10.866442 31411 generic.go:334] "Generic (PLEG): container finished" podID="f57ae074-7d06-4293-91d0-f5dbd04ff2f4" containerID="c9b7716d34f617ee5f4489a372acec06687be354030335c7fd703d8f21e3662b" exitCode=0 Feb 24 02:38:10.866997 master-0 kubenswrapper[31411]: I0224 02:38:10.866544 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f57ae074-7d06-4293-91d0-f5dbd04ff2f4","Type":"ContainerDied","Data":"c9b7716d34f617ee5f4489a372acec06687be354030335c7fd703d8f21e3662b"} Feb 24 02:38:10.867110 master-0 kubenswrapper[31411]: I0224 02:38:10.867091 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f57ae074-7d06-4293-91d0-f5dbd04ff2f4","Type":"ContainerStarted","Data":"f36c8291c46c4a234c176f7ec836c33f83a330292535e163614114670a9a0c1f"} Feb 24 02:38:10.870269 master-0 kubenswrapper[31411]: I0224 02:38:10.870244 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vdhjz" 
event={"ID":"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9","Type":"ContainerStarted","Data":"24ed2b6d864e061dc427eae421ffa48e8027e239f97e5f29b3e2b21204ad97ff"} Feb 24 02:38:11.026701 master-0 kubenswrapper[31411]: I0224 02:38:11.026591 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vdhjz" podStartSLOduration=3.370544178 podStartE2EDuration="15.026554788s" podCreationTimestamp="2026-02-24 02:37:56 +0000 UTC" firstStartedPulling="2026-02-24 02:37:58.770073315 +0000 UTC m=+1021.987271161" lastFinishedPulling="2026-02-24 02:38:10.426083915 +0000 UTC m=+1033.643281771" observedRunningTime="2026-02-24 02:38:11.016936158 +0000 UTC m=+1034.234134014" watchObservedRunningTime="2026-02-24 02:38:11.026554788 +0000 UTC m=+1034.243752644" Feb 24 02:38:11.890132 master-0 kubenswrapper[31411]: I0224 02:38:11.890026 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f57ae074-7d06-4293-91d0-f5dbd04ff2f4","Type":"ContainerStarted","Data":"23b76c585b4f4f4a5bb6195dfe9fc66aae331e8f812dba099eb65381fb4d4dc6"} Feb 24 02:38:13.398517 master-0 kubenswrapper[31411]: I0224 02:38:13.398373 31411 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod65df10fc-36c4-4eab-aaf3-962a5294face"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod65df10fc-36c4-4eab-aaf3-962a5294face] : Timed out while waiting for systemd to remove kubepods-besteffort-pod65df10fc_36c4_4eab_aaf3_962a5294face.slice" Feb 24 02:38:14.998483 master-0 kubenswrapper[31411]: I0224 02:38:14.996284 31411 generic.go:334] "Generic (PLEG): container finished" podID="f57ae074-7d06-4293-91d0-f5dbd04ff2f4" containerID="23b76c585b4f4f4a5bb6195dfe9fc66aae331e8f812dba099eb65381fb4d4dc6" exitCode=0 Feb 24 02:38:14.998483 master-0 kubenswrapper[31411]: I0224 02:38:14.996377 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ironic-inspector-0" event={"ID":"f57ae074-7d06-4293-91d0-f5dbd04ff2f4","Type":"ContainerDied","Data":"23b76c585b4f4f4a5bb6195dfe9fc66aae331e8f812dba099eb65381fb4d4dc6"} Feb 24 02:38:18.066647 master-0 kubenswrapper[31411]: I0224 02:38:18.066537 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f57ae074-7d06-4293-91d0-f5dbd04ff2f4","Type":"ContainerStarted","Data":"4c6354e1d761acb3af01b9db9a12099038d4753c6c214758baac8c585cb0e5a9"} Feb 24 02:38:19.085166 master-0 kubenswrapper[31411]: I0224 02:38:19.084184 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f57ae074-7d06-4293-91d0-f5dbd04ff2f4","Type":"ContainerStarted","Data":"c125684f50501ad0ec1544f4d416beab99107a8e38e0e6a701fb583e1ef69d99"} Feb 24 02:38:20.112741 master-0 kubenswrapper[31411]: I0224 02:38:20.112564 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f57ae074-7d06-4293-91d0-f5dbd04ff2f4","Type":"ContainerStarted","Data":"ea850b5d6c68b2294401e0eacad4f91d7798f2388b08b725d21cace92ea42d27"} Feb 24 02:38:20.112741 master-0 kubenswrapper[31411]: I0224 02:38:20.112642 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f57ae074-7d06-4293-91d0-f5dbd04ff2f4","Type":"ContainerStarted","Data":"424d00d84a045a1408ba45bcdc7fb498a80a17e69694968d17ea14f33ea635dc"} Feb 24 02:38:21.140955 master-0 kubenswrapper[31411]: I0224 02:38:21.140836 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f57ae074-7d06-4293-91d0-f5dbd04ff2f4","Type":"ContainerStarted","Data":"035757bc4b1d3cfbbca8871a2d561c2c844b001942c5b1a73d7c81e45ba14d3c"} Feb 24 02:38:21.142021 master-0 kubenswrapper[31411]: I0224 02:38:21.141112 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 24 02:38:21.420853 master-0 
kubenswrapper[31411]: I0224 02:38:21.402142 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=18.402112513 podStartE2EDuration="18.402112513s" podCreationTimestamp="2026-02-24 02:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:38:21.385223719 +0000 UTC m=+1044.602421645" watchObservedRunningTime="2026-02-24 02:38:21.402112513 +0000 UTC m=+1044.619310399" Feb 24 02:38:22.158266 master-0 kubenswrapper[31411]: I0224 02:38:22.158140 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 24 02:38:23.244249 master-0 kubenswrapper[31411]: I0224 02:38:23.244154 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 24 02:38:24.194502 master-0 kubenswrapper[31411]: I0224 02:38:24.194388 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 24 02:38:24.299646 master-0 kubenswrapper[31411]: I0224 02:38:24.295702 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 24 02:38:24.299646 master-0 kubenswrapper[31411]: I0224 02:38:24.295790 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 24 02:38:24.299646 master-0 kubenswrapper[31411]: I0224 02:38:24.295815 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 24 02:38:24.299646 master-0 kubenswrapper[31411]: I0224 02:38:24.295826 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 24 02:38:24.328924 master-0 kubenswrapper[31411]: I0224 02:38:24.328876 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/ironic-inspector-0" Feb 24 02:38:24.333669 master-0 kubenswrapper[31411]: I0224 02:38:24.333645 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 24 02:38:25.222119 master-0 kubenswrapper[31411]: I0224 02:38:25.222035 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 24 02:38:25.228083 master-0 kubenswrapper[31411]: I0224 02:38:25.228020 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 24 02:38:29.307962 master-0 kubenswrapper[31411]: I0224 02:38:29.307878 31411 generic.go:334] "Generic (PLEG): container finished" podID="d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9" containerID="24ed2b6d864e061dc427eae421ffa48e8027e239f97e5f29b3e2b21204ad97ff" exitCode=0 Feb 24 02:38:29.308751 master-0 kubenswrapper[31411]: I0224 02:38:29.307979 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vdhjz" event={"ID":"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9","Type":"ContainerDied","Data":"24ed2b6d864e061dc427eae421ffa48e8027e239f97e5f29b3e2b21204ad97ff"} Feb 24 02:38:30.917102 master-0 kubenswrapper[31411]: I0224 02:38:30.917012 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vdhjz" Feb 24 02:38:31.038246 master-0 kubenswrapper[31411]: I0224 02:38:31.038085 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-config-data\") pod \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " Feb 24 02:38:31.038687 master-0 kubenswrapper[31411]: I0224 02:38:31.038254 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pb6d\" (UniqueName: \"kubernetes.io/projected/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-kube-api-access-9pb6d\") pod \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " Feb 24 02:38:31.038687 master-0 kubenswrapper[31411]: I0224 02:38:31.038593 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-scripts\") pod \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " Feb 24 02:38:31.042027 master-0 kubenswrapper[31411]: I0224 02:38:31.038724 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-combined-ca-bundle\") pod \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\" (UID: \"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9\") " Feb 24 02:38:31.042886 master-0 kubenswrapper[31411]: I0224 02:38:31.042709 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-kube-api-access-9pb6d" (OuterVolumeSpecName: "kube-api-access-9pb6d") pod "d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9" (UID: "d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9"). InnerVolumeSpecName "kube-api-access-9pb6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:38:31.044887 master-0 kubenswrapper[31411]: I0224 02:38:31.044839 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-scripts" (OuterVolumeSpecName: "scripts") pod "d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9" (UID: "d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:31.083609 master-0 kubenswrapper[31411]: I0224 02:38:31.080433 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9" (UID: "d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:31.083609 master-0 kubenswrapper[31411]: I0224 02:38:31.083321 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-config-data" (OuterVolumeSpecName: "config-data") pod "d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9" (UID: "d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:31.143546 master-0 kubenswrapper[31411]: I0224 02:38:31.143483 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pb6d\" (UniqueName: \"kubernetes.io/projected/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-kube-api-access-9pb6d\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:31.143546 master-0 kubenswrapper[31411]: I0224 02:38:31.143541 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:31.143813 master-0 kubenswrapper[31411]: I0224 02:38:31.143561 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:31.143813 master-0 kubenswrapper[31411]: I0224 02:38:31.143597 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:31.343447 master-0 kubenswrapper[31411]: I0224 02:38:31.343257 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vdhjz" event={"ID":"d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9","Type":"ContainerDied","Data":"a6f5f7245e9684c42971ef90fc2f9320589230fdecc8d13952f68c1f41cb6f3d"} Feb 24 02:38:31.343447 master-0 kubenswrapper[31411]: I0224 02:38:31.343342 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6f5f7245e9684c42971ef90fc2f9320589230fdecc8d13952f68c1f41cb6f3d" Feb 24 02:38:31.343447 master-0 kubenswrapper[31411]: I0224 02:38:31.343392 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vdhjz" Feb 24 02:38:31.680532 master-0 kubenswrapper[31411]: I0224 02:38:31.680337 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 24 02:38:31.680980 master-0 kubenswrapper[31411]: E0224 02:38:31.680935 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdcc60c-f7fb-43df-8b93-035a0796383f" containerName="dnsmasq-dns" Feb 24 02:38:31.680980 master-0 kubenswrapper[31411]: I0224 02:38:31.680962 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdcc60c-f7fb-43df-8b93-035a0796383f" containerName="dnsmasq-dns" Feb 24 02:38:31.681135 master-0 kubenswrapper[31411]: E0224 02:38:31.681028 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdcc60c-f7fb-43df-8b93-035a0796383f" containerName="init" Feb 24 02:38:31.681135 master-0 kubenswrapper[31411]: I0224 02:38:31.681038 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdcc60c-f7fb-43df-8b93-035a0796383f" containerName="init" Feb 24 02:38:31.681135 master-0 kubenswrapper[31411]: E0224 02:38:31.681084 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9" containerName="nova-cell0-conductor-db-sync" Feb 24 02:38:31.681135 master-0 kubenswrapper[31411]: I0224 02:38:31.681093 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9" containerName="nova-cell0-conductor-db-sync" Feb 24 02:38:31.681413 master-0 kubenswrapper[31411]: I0224 02:38:31.681387 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bdcc60c-f7fb-43df-8b93-035a0796383f" containerName="dnsmasq-dns" Feb 24 02:38:31.681496 master-0 kubenswrapper[31411]: I0224 02:38:31.681425 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9" containerName="nova-cell0-conductor-db-sync" Feb 24 02:38:31.682352 master-0 
kubenswrapper[31411]: I0224 02:38:31.682306 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:31.697296 master-0 kubenswrapper[31411]: I0224 02:38:31.697210 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 24 02:38:31.703249 master-0 kubenswrapper[31411]: I0224 02:38:31.703188 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 24 02:38:31.875269 master-0 kubenswrapper[31411]: I0224 02:38:31.875057 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d00e0e-a4bc-45ca-bf97-ee71a47cff31-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31\") " pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:31.875813 master-0 kubenswrapper[31411]: I0224 02:38:31.875439 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdm2\" (UniqueName: \"kubernetes.io/projected/a2d00e0e-a4bc-45ca-bf97-ee71a47cff31-kube-api-access-ccdm2\") pod \"nova-cell0-conductor-0\" (UID: \"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31\") " pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:31.875813 master-0 kubenswrapper[31411]: I0224 02:38:31.875659 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d00e0e-a4bc-45ca-bf97-ee71a47cff31-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31\") " pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:31.978155 master-0 kubenswrapper[31411]: I0224 02:38:31.978064 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a2d00e0e-a4bc-45ca-bf97-ee71a47cff31-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31\") " pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:31.979134 master-0 kubenswrapper[31411]: I0224 02:38:31.978321 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d00e0e-a4bc-45ca-bf97-ee71a47cff31-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31\") " pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:31.979134 master-0 kubenswrapper[31411]: I0224 02:38:31.978450 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccdm2\" (UniqueName: \"kubernetes.io/projected/a2d00e0e-a4bc-45ca-bf97-ee71a47cff31-kube-api-access-ccdm2\") pod \"nova-cell0-conductor-0\" (UID: \"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31\") " pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:31.985655 master-0 kubenswrapper[31411]: I0224 02:38:31.985534 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d00e0e-a4bc-45ca-bf97-ee71a47cff31-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31\") " pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:31.985841 master-0 kubenswrapper[31411]: I0224 02:38:31.985801 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d00e0e-a4bc-45ca-bf97-ee71a47cff31-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31\") " pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:32.011734 master-0 kubenswrapper[31411]: I0224 02:38:32.010818 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccdm2\" (UniqueName: 
\"kubernetes.io/projected/a2d00e0e-a4bc-45ca-bf97-ee71a47cff31-kube-api-access-ccdm2\") pod \"nova-cell0-conductor-0\" (UID: \"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31\") " pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:32.018280 master-0 kubenswrapper[31411]: I0224 02:38:32.018169 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:32.603169 master-0 kubenswrapper[31411]: I0224 02:38:32.603087 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 24 02:38:32.603830 master-0 kubenswrapper[31411]: W0224 02:38:32.603728 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d00e0e_a4bc_45ca_bf97_ee71a47cff31.slice/crio-896aac5bcdf24be3fff7ac2ec84041532cbd903d37eb8fbcaaaabdac3e12d852 WatchSource:0}: Error finding container 896aac5bcdf24be3fff7ac2ec84041532cbd903d37eb8fbcaaaabdac3e12d852: Status 404 returned error can't find the container with id 896aac5bcdf24be3fff7ac2ec84041532cbd903d37eb8fbcaaaabdac3e12d852 Feb 24 02:38:33.391184 master-0 kubenswrapper[31411]: I0224 02:38:33.391034 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31","Type":"ContainerStarted","Data":"72a221e6cd9a77ad67ce2dbb772ee0e21d8225363d442c63569a4ba6c90b5e60"} Feb 24 02:38:33.394280 master-0 kubenswrapper[31411]: I0224 02:38:33.394234 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a2d00e0e-a4bc-45ca-bf97-ee71a47cff31","Type":"ContainerStarted","Data":"896aac5bcdf24be3fff7ac2ec84041532cbd903d37eb8fbcaaaabdac3e12d852"} Feb 24 02:38:33.394493 master-0 kubenswrapper[31411]: I0224 02:38:33.394467 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 24 02:38:33.448002 master-0 
kubenswrapper[31411]: I0224 02:38:33.447824 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.447799086 podStartE2EDuration="2.447799086s" podCreationTimestamp="2026-02-24 02:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:38:33.434112663 +0000 UTC m=+1056.651310549" watchObservedRunningTime="2026-02-24 02:38:33.447799086 +0000 UTC m=+1056.664996932"
Feb 24 02:38:37.083985 master-0 kubenswrapper[31411]: I0224 02:38:37.083909 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 24 02:38:37.638614 master-0 kubenswrapper[31411]: I0224 02:38:37.636446 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mz9lj"]
Feb 24 02:38:37.641795 master-0 kubenswrapper[31411]: I0224 02:38:37.639428 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.646642 master-0 kubenswrapper[31411]: I0224 02:38:37.646607 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 24 02:38:37.647097 master-0 kubenswrapper[31411]: I0224 02:38:37.647060 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 24 02:38:37.651484 master-0 kubenswrapper[31411]: I0224 02:38:37.651452 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mz9lj"]
Feb 24 02:38:37.659908 master-0 kubenswrapper[31411]: I0224 02:38:37.659852 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hvnn\" (UniqueName: \"kubernetes.io/projected/23b93a03-505a-41a6-94b6-9344778d91be-kube-api-access-5hvnn\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.660104 master-0 kubenswrapper[31411]: I0224 02:38:37.659946 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.660104 master-0 kubenswrapper[31411]: I0224 02:38:37.660066 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-config-data\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.660266 master-0 kubenswrapper[31411]: I0224 02:38:37.660233 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-scripts\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.796598 master-0 kubenswrapper[31411]: I0224 02:38:37.795955 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-config-data\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.796598 master-0 kubenswrapper[31411]: I0224 02:38:37.796443 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-scripts\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.796915 master-0 kubenswrapper[31411]: I0224 02:38:37.796613 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hvnn\" (UniqueName: \"kubernetes.io/projected/23b93a03-505a-41a6-94b6-9344778d91be-kube-api-access-5hvnn\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.796915 master-0 kubenswrapper[31411]: I0224 02:38:37.796720 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.808527 master-0 kubenswrapper[31411]: I0224 02:38:37.804606 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-scripts\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.808527 master-0 kubenswrapper[31411]: I0224 02:38:37.807163 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Feb 24 02:38:37.812704 master-0 kubenswrapper[31411]: I0224 02:38:37.811311 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.812704 master-0 kubenswrapper[31411]: I0224 02:38:37.812408 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:37.834678 master-0 kubenswrapper[31411]: I0224 02:38:37.830695 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data"
Feb 24 02:38:37.842337 master-0 kubenswrapper[31411]: I0224 02:38:37.842121 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-config-data\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.853708 master-0 kubenswrapper[31411]: I0224 02:38:37.853666 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hvnn\" (UniqueName: \"kubernetes.io/projected/23b93a03-505a-41a6-94b6-9344778d91be-kube-api-access-5hvnn\") pod \"nova-cell0-cell-mapping-mz9lj\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:37.863301 master-0 kubenswrapper[31411]: I0224 02:38:37.863231 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Feb 24 02:38:37.991661 master-0 kubenswrapper[31411]: I0224 02:38:37.988684 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 24 02:38:37.997459 master-0 kubenswrapper[31411]: I0224 02:38:37.992746 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.013402 master-0 kubenswrapper[31411]: I0224 02:38:38.009601 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 24 02:38:38.013402 master-0 kubenswrapper[31411]: I0224 02:38:38.010764 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 24 02:38:38.013402 master-0 kubenswrapper[31411]: I0224 02:38:38.011857 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mz9lj"
Feb 24 02:38:38.013402 master-0 kubenswrapper[31411]: I0224 02:38:38.012795 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tn5\" (UniqueName: \"kubernetes.io/projected/74a07828-16f1-4b69-bfbd-a0e76519ce98-kube-api-access-k8tn5\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"74a07828-16f1-4b69-bfbd-a0e76519ce98\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.013402 master-0 kubenswrapper[31411]: I0224 02:38:38.012864 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a07828-16f1-4b69-bfbd-a0e76519ce98-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"74a07828-16f1-4b69-bfbd-a0e76519ce98\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.013402 master-0 kubenswrapper[31411]: I0224 02:38:38.012920 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a07828-16f1-4b69-bfbd-a0e76519ce98-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"74a07828-16f1-4b69-bfbd-a0e76519ce98\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.116651 master-0 kubenswrapper[31411]: I0224 02:38:38.113735 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 02:38:38.128590 master-0 kubenswrapper[31411]: I0224 02:38:38.128400 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a07828-16f1-4b69-bfbd-a0e76519ce98-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"74a07828-16f1-4b69-bfbd-a0e76519ce98\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.128755 master-0 kubenswrapper[31411]: I0224 02:38:38.128685 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a07828-16f1-4b69-bfbd-a0e76519ce98-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"74a07828-16f1-4b69-bfbd-a0e76519ce98\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.145592 master-0 kubenswrapper[31411]: I0224 02:38:38.128803 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-config-data\") pod \"nova-scheduler-0\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") " pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.145592 master-0 kubenswrapper[31411]: I0224 02:38:38.129594 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") " pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.145592 master-0 kubenswrapper[31411]: I0224 02:38:38.129630 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8tn5\" (UniqueName: \"kubernetes.io/projected/74a07828-16f1-4b69-bfbd-a0e76519ce98-kube-api-access-k8tn5\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"74a07828-16f1-4b69-bfbd-a0e76519ce98\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.145592 master-0 kubenswrapper[31411]: I0224 02:38:38.129654 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2gj8\" (UniqueName: \"kubernetes.io/projected/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-kube-api-access-s2gj8\") pod \"nova-scheduler-0\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") " pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.145592 master-0 kubenswrapper[31411]: I0224 02:38:38.138703 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a07828-16f1-4b69-bfbd-a0e76519ce98-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"74a07828-16f1-4b69-bfbd-a0e76519ce98\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.145592 master-0 kubenswrapper[31411]: I0224 02:38:38.141691 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 24 02:38:38.158135 master-0 kubenswrapper[31411]: I0224 02:38:38.153317 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 24 02:38:38.158135 master-0 kubenswrapper[31411]: I0224 02:38:38.155637 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a07828-16f1-4b69-bfbd-a0e76519ce98-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"74a07828-16f1-4b69-bfbd-a0e76519ce98\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.162602 master-0 kubenswrapper[31411]: I0224 02:38:38.158932 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 02:38:38.192683 master-0 kubenswrapper[31411]: I0224 02:38:38.191868 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8tn5\" (UniqueName: \"kubernetes.io/projected/74a07828-16f1-4b69-bfbd-a0e76519ce98-kube-api-access-k8tn5\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"74a07828-16f1-4b69-bfbd-a0e76519ce98\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.241598 master-0 kubenswrapper[31411]: I0224 02:38:38.234356 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d483bb01-7ee4-4f34-a40b-15bf34e365bc-logs\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.241598 master-0 kubenswrapper[31411]: I0224 02:38:38.234463 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-config-data\") pod \"nova-scheduler-0\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") " pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.241598 master-0 kubenswrapper[31411]: I0224 02:38:38.240250 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 24 02:38:38.245590 master-0 kubenswrapper[31411]: I0224 02:38:38.234520 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.245590 master-0 kubenswrapper[31411]: I0224 02:38:38.242372 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx4bv\" (UniqueName: \"kubernetes.io/projected/d483bb01-7ee4-4f34-a40b-15bf34e365bc-kube-api-access-vx4bv\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.245590 master-0 kubenswrapper[31411]: I0224 02:38:38.242801 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-config-data\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.245590 master-0 kubenswrapper[31411]: I0224 02:38:38.243005 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") " pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.245590 master-0 kubenswrapper[31411]: I0224 02:38:38.243054 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2gj8\" (UniqueName: \"kubernetes.io/projected/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-kube-api-access-s2gj8\") pod \"nova-scheduler-0\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") " pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.327606 master-0 kubenswrapper[31411]: I0224 02:38:38.285645 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") " pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.327606 master-0 kubenswrapper[31411]: I0224 02:38:38.286876 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-config-data\") pod \"nova-scheduler-0\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") " pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.327606 master-0 kubenswrapper[31411]: I0224 02:38:38.312065 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2gj8\" (UniqueName: \"kubernetes.io/projected/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-kube-api-access-s2gj8\") pod \"nova-scheduler-0\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") " pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.421076 master-0 kubenswrapper[31411]: I0224 02:38:38.351855 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d483bb01-7ee4-4f34-a40b-15bf34e365bc-logs\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.421076 master-0 kubenswrapper[31411]: I0224 02:38:38.351957 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.421076 master-0 kubenswrapper[31411]: I0224 02:38:38.351979 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx4bv\" (UniqueName: \"kubernetes.io/projected/d483bb01-7ee4-4f34-a40b-15bf34e365bc-kube-api-access-vx4bv\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.421076 master-0 kubenswrapper[31411]: I0224 02:38:38.352048 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-config-data\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.421076 master-0 kubenswrapper[31411]: I0224 02:38:38.352920 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d483bb01-7ee4-4f34-a40b-15bf34e365bc-logs\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.421076 master-0 kubenswrapper[31411]: I0224 02:38:38.355976 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 24 02:38:38.421076 master-0 kubenswrapper[31411]: I0224 02:38:38.367419 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.421076 master-0 kubenswrapper[31411]: I0224 02:38:38.367591 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-config-data\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:38.440832 master-0 kubenswrapper[31411]: I0224 02:38:38.440505 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 24 02:38:38.443800 master-0 kubenswrapper[31411]: I0224 02:38:38.443641 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 02:38:38.448209 master-0 kubenswrapper[31411]: I0224 02:38:38.446864 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 24 02:38:38.465528 master-0 kubenswrapper[31411]: I0224 02:38:38.465462 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"]
Feb 24 02:38:38.469993 master-0 kubenswrapper[31411]: I0224 02:38:38.469946 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.493054 master-0 kubenswrapper[31411]: I0224 02:38:38.485225 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 24 02:38:38.522370 master-0 kubenswrapper[31411]: I0224 02:38:38.522314 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 24 02:38:38.524912 master-0 kubenswrapper[31411]: I0224 02:38:38.524697 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:38.529658 master-0 kubenswrapper[31411]: I0224 02:38:38.527948 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 24 02:38:38.543733 master-0 kubenswrapper[31411]: I0224 02:38:38.543674 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"]
Feb 24 02:38:38.556711 master-0 kubenswrapper[31411]: I0224 02:38:38.556326 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4m86\" (UniqueName: \"kubernetes.io/projected/29d4983d-bc42-45b7-ae25-f7c24cdd8512-kube-api-access-b4m86\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.556711 master-0 kubenswrapper[31411]: I0224 02:38:38.556390 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29d4983d-bc42-45b7-ae25-f7c24cdd8512-logs\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.559603 master-0 kubenswrapper[31411]: I0224 02:38:38.559552 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.560150 master-0 kubenswrapper[31411]: I0224 02:38:38.560105 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-config-data\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.663341 master-0 kubenswrapper[31411]: I0224 02:38:38.663289 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.663341 master-0 kubenswrapper[31411]: I0224 02:38:38.663369 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:38.663653 master-0 kubenswrapper[31411]: I0224 02:38:38.663436 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-svc\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.663653 master-0 kubenswrapper[31411]: I0224 02:38:38.663472 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.663653 master-0 kubenswrapper[31411]: I0224 02:38:38.663511 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9sr\" (UniqueName: \"kubernetes.io/projected/f22b0f7d-7993-44dd-a35a-d2481099ae64-kube-api-access-pd9sr\") pod \"nova-cell1-novncproxy-0\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:38.663653 master-0 kubenswrapper[31411]: I0224 02:38:38.663539 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4m86\" (UniqueName: \"kubernetes.io/projected/29d4983d-bc42-45b7-ae25-f7c24cdd8512-kube-api-access-b4m86\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.663653 master-0 kubenswrapper[31411]: I0224 02:38:38.663565 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29d4983d-bc42-45b7-ae25-f7c24cdd8512-logs\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.663653 master-0 kubenswrapper[31411]: I0224 02:38:38.663603 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.663854 master-0 kubenswrapper[31411]: I0224 02:38:38.663700 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.663854 master-0 kubenswrapper[31411]: I0224 02:38:38.663794 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:38.664170 master-0 kubenswrapper[31411]: I0224 02:38:38.664136 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-config\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.664385 master-0 kubenswrapper[31411]: I0224 02:38:38.664358 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-config-data\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.664544 master-0 kubenswrapper[31411]: I0224 02:38:38.664517 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xlj\" (UniqueName: \"kubernetes.io/projected/d432fd91-0f08-43b3-8201-65f8d4e7efe8-kube-api-access-v9xlj\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.666422 master-0 kubenswrapper[31411]: I0224 02:38:38.665548 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29d4983d-bc42-45b7-ae25-f7c24cdd8512-logs\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.669843 master-0 kubenswrapper[31411]: I0224 02:38:38.669739 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.671490 master-0 kubenswrapper[31411]: I0224 02:38:38.670705 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-config-data\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.770467 master-0 kubenswrapper[31411]: I0224 02:38:38.770370 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:38.770753 master-0 kubenswrapper[31411]: I0224 02:38:38.770557 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-svc\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.770753 master-0 kubenswrapper[31411]: I0224 02:38:38.770619 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.770753 master-0 kubenswrapper[31411]: I0224 02:38:38.770677 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd9sr\" (UniqueName: \"kubernetes.io/projected/f22b0f7d-7993-44dd-a35a-d2481099ae64-kube-api-access-pd9sr\") pod \"nova-cell1-novncproxy-0\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:38.771144 master-0 kubenswrapper[31411]: I0224 02:38:38.771110 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.771195 master-0 kubenswrapper[31411]: I0224 02:38:38.771159 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:38.771332 master-0 kubenswrapper[31411]: I0224 02:38:38.771304 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-config\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.771533 master-0 kubenswrapper[31411]: I0224 02:38:38.771468 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9xlj\" (UniqueName: \"kubernetes.io/projected/d432fd91-0f08-43b3-8201-65f8d4e7efe8-kube-api-access-v9xlj\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.771631 master-0 kubenswrapper[31411]: I0224 02:38:38.771599 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.772682 master-0 kubenswrapper[31411]: I0224 02:38:38.772647 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-config\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.773556 master-0 kubenswrapper[31411]: I0224 02:38:38.773522 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.773640 master-0 kubenswrapper[31411]: I0224 02:38:38.773603 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.773962 master-0 kubenswrapper[31411]: I0224 02:38:38.773931 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.777945 master-0 kubenswrapper[31411]: I0224 02:38:38.777897 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:38.778158 master-0 kubenswrapper[31411]: I0224 02:38:38.778124 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-svc\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.778999 master-0 kubenswrapper[31411]: I0224 02:38:38.778901 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:38.989602 master-0 kubenswrapper[31411]: I0224 02:38:38.989268 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4m86\" (UniqueName: \"kubernetes.io/projected/29d4983d-bc42-45b7-ae25-f7c24cdd8512-kube-api-access-b4m86\") pod \"nova-api-0\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") " pod="openstack/nova-api-0"
Feb 24 02:38:38.991926 master-0 kubenswrapper[31411]: I0224 02:38:38.991883 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9xlj\" (UniqueName: \"kubernetes.io/projected/d432fd91-0f08-43b3-8201-65f8d4e7efe8-kube-api-access-v9xlj\") pod \"dnsmasq-dns-6f6fd9d5d9-zff6h\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"
Feb 24 02:38:38.993474 master-0 kubenswrapper[31411]: I0224 02:38:38.993443 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx4bv\" (UniqueName: \"kubernetes.io/projected/d483bb01-7ee4-4f34-a40b-15bf34e365bc-kube-api-access-vx4bv\") pod \"nova-metadata-0\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:39.009109 master-0 kubenswrapper[31411]: I0224 02:38:39.007682 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd9sr\" (UniqueName: \"kubernetes.io/projected/f22b0f7d-7993-44dd-a35a-d2481099ae64-kube-api-access-pd9sr\") pod \"nova-cell1-novncproxy-0\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 24 02:38:39.033717 master-0 kubenswrapper[31411]: I0224 02:38:39.033670 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 24 02:38:39.058166 master-0 kubenswrapper[31411]: I0224 02:38:39.057438 31411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 02:38:39.083628 master-0 kubenswrapper[31411]: I0224 02:38:39.077242 31411 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Feb 24 02:38:39.087746 master-0 kubenswrapper[31411]: W0224 02:38:39.084108 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c3681e_f1f6_4953_8b9c_fc9b08618f5f.slice/crio-0d601bdf26c3543d6c0e9f08b39adf897f702a9a79bfde25b0f059ee15ef7c1c WatchSource:0}: Error finding container 0d601bdf26c3543d6c0e9f08b39adf897f702a9a79bfde25b0f059ee15ef7c1c: Status 404 returned error can't find the container with id 0d601bdf26c3543d6c0e9f08b39adf897f702a9a79bfde25b0f059ee15ef7c1c Feb 24 02:38:39.094429 master-0 kubenswrapper[31411]: I0224 02:38:39.088697 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 24 02:38:39.103831 master-0 kubenswrapper[31411]: I0224 02:38:39.097519 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" Feb 24 02:38:39.129596 master-0 kubenswrapper[31411]: I0224 02:38:39.123957 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mz9lj"] Feb 24 02:38:39.183527 master-0 kubenswrapper[31411]: I0224 02:38:39.183369 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 02:38:39.191535 master-0 kubenswrapper[31411]: I0224 02:38:39.191377 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:38:39.200429 master-0 kubenswrapper[31411]: I0224 02:38:39.200011 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:38:39.518619 master-0 kubenswrapper[31411]: I0224 02:38:39.516534 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mz9lj" event={"ID":"23b93a03-505a-41a6-94b6-9344778d91be","Type":"ContainerStarted","Data":"32da3892179cccca9e16d1e01d1b1762e3f078ab921801f1ce77728bb81e8507"} Feb 24 02:38:39.518619 master-0 kubenswrapper[31411]: I0224 02:38:39.516609 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mz9lj" event={"ID":"23b93a03-505a-41a6-94b6-9344778d91be","Type":"ContainerStarted","Data":"cc0989e58940568fba59b433759f7dc52b913e7f972c4cdcb1248af22d4ab46f"} Feb 24 02:38:39.533958 master-0 kubenswrapper[31411]: I0224 02:38:39.533862 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"74a07828-16f1-4b69-bfbd-a0e76519ce98","Type":"ContainerStarted","Data":"2a9d08e7ba17a9ac4a9d91e0bdada9be7c7e405bcc40ae993d0a2bd4b206d2b3"} Feb 24 02:38:39.537811 master-0 kubenswrapper[31411]: I0224 02:38:39.537145 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"43c3681e-f1f6-4953-8b9c-fc9b08618f5f","Type":"ContainerStarted","Data":"0d601bdf26c3543d6c0e9f08b39adf897f702a9a79bfde25b0f059ee15ef7c1c"} Feb 24 02:38:39.751588 master-0 kubenswrapper[31411]: I0224 02:38:39.751360 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mz9lj" podStartSLOduration=2.751337649 podStartE2EDuration="2.751337649s" podCreationTimestamp="2026-02-24 02:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:38:39.575289527 +0000 UTC m=+1062.792487363" watchObservedRunningTime="2026-02-24 02:38:39.751337649 +0000 UTC m=+1062.968535485" Feb 24 02:38:39.770515 master-0 
kubenswrapper[31411]: I0224 02:38:39.770473 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:38:39.858614 master-0 kubenswrapper[31411]: I0224 02:38:39.857843 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7jt69"] Feb 24 02:38:39.870696 master-0 kubenswrapper[31411]: I0224 02:38:39.870635 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:39.874258 master-0 kubenswrapper[31411]: I0224 02:38:39.873444 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 24 02:38:39.874258 master-0 kubenswrapper[31411]: I0224 02:38:39.874055 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 24 02:38:39.874258 master-0 kubenswrapper[31411]: W0224 02:38:39.874193 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd483bb01_7ee4_4f34_a40b_15bf34e365bc.slice/crio-fdbc4d69dfe3321b4d2fc0de6ed9219440f3227203db73309c77b1dc103e5a5a WatchSource:0}: Error finding container fdbc4d69dfe3321b4d2fc0de6ed9219440f3227203db73309c77b1dc103e5a5a: Status 404 returned error can't find the container with id fdbc4d69dfe3321b4d2fc0de6ed9219440f3227203db73309c77b1dc103e5a5a Feb 24 02:38:39.879930 master-0 kubenswrapper[31411]: I0224 02:38:39.878055 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7jt69"] Feb 24 02:38:39.893828 master-0 kubenswrapper[31411]: I0224 02:38:39.893751 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:39.971022 master-0 kubenswrapper[31411]: I0224 02:38:39.970932 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-scripts\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:39.971504 master-0 kubenswrapper[31411]: I0224 02:38:39.971465 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-config-data\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:39.971932 master-0 kubenswrapper[31411]: I0224 02:38:39.971878 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dlxv\" (UniqueName: \"kubernetes.io/projected/83d58022-3d1c-4bab-ba2d-ca5a54d511db-kube-api-access-5dlxv\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:39.972002 master-0 kubenswrapper[31411]: I0224 02:38:39.971912 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.039151 master-0 kubenswrapper[31411]: I0224 02:38:40.039019 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"] Feb 24 02:38:40.077409 master-0 kubenswrapper[31411]: I0224 02:38:40.077309 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dlxv\" (UniqueName: \"kubernetes.io/projected/83d58022-3d1c-4bab-ba2d-ca5a54d511db-kube-api-access-5dlxv\") pod 
\"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.078086 master-0 kubenswrapper[31411]: I0224 02:38:40.077417 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.078086 master-0 kubenswrapper[31411]: I0224 02:38:40.077560 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-scripts\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.078086 master-0 kubenswrapper[31411]: I0224 02:38:40.077970 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-config-data\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.083590 master-0 kubenswrapper[31411]: I0224 02:38:40.083371 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.084016 master-0 kubenswrapper[31411]: I0224 02:38:40.083978 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-config-data\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.091650 master-0 kubenswrapper[31411]: I0224 02:38:40.090511 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-scripts\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.104566 master-0 kubenswrapper[31411]: I0224 02:38:40.104376 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dlxv\" (UniqueName: \"kubernetes.io/projected/83d58022-3d1c-4bab-ba2d-ca5a54d511db-kube-api-access-5dlxv\") pod \"nova-cell1-conductor-db-sync-7jt69\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.196836 master-0 kubenswrapper[31411]: I0224 02:38:40.196798 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 02:38:40.235050 master-0 kubenswrapper[31411]: I0224 02:38:40.234964 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:40.565460 master-0 kubenswrapper[31411]: I0224 02:38:40.565400 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f22b0f7d-7993-44dd-a35a-d2481099ae64","Type":"ContainerStarted","Data":"80654eec1f9cb0a32f902ab48489a095d24845d0555177bc314d6830c9725a87"} Feb 24 02:38:40.568660 master-0 kubenswrapper[31411]: I0224 02:38:40.568586 31411 generic.go:334] "Generic (PLEG): container finished" podID="d432fd91-0f08-43b3-8201-65f8d4e7efe8" containerID="acf7c71f5752a3d46eae2f230998f1510de297f03fdd4fc0d2087b03506352d3" exitCode=0 Feb 24 02:38:40.568965 master-0 kubenswrapper[31411]: I0224 02:38:40.568924 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" event={"ID":"d432fd91-0f08-43b3-8201-65f8d4e7efe8","Type":"ContainerDied","Data":"acf7c71f5752a3d46eae2f230998f1510de297f03fdd4fc0d2087b03506352d3"} Feb 24 02:38:40.569146 master-0 kubenswrapper[31411]: I0224 02:38:40.569115 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" event={"ID":"d432fd91-0f08-43b3-8201-65f8d4e7efe8","Type":"ContainerStarted","Data":"45326ccb0211773e324f73244d8a906c31aa58835a98f1feda3ba56614a40ca3"} Feb 24 02:38:40.571335 master-0 kubenswrapper[31411]: I0224 02:38:40.571301 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29d4983d-bc42-45b7-ae25-f7c24cdd8512","Type":"ContainerStarted","Data":"bc692cd43f3ec446b25a7926a6ac1601bbdbd534f995866ede2c582240262229"} Feb 24 02:38:40.573538 master-0 kubenswrapper[31411]: I0224 02:38:40.573508 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d483bb01-7ee4-4f34-a40b-15bf34e365bc","Type":"ContainerStarted","Data":"fdbc4d69dfe3321b4d2fc0de6ed9219440f3227203db73309c77b1dc103e5a5a"} Feb 24 02:38:40.748762 master-0 
kubenswrapper[31411]: I0224 02:38:40.748673 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7jt69"] Feb 24 02:38:40.932745 master-0 kubenswrapper[31411]: W0224 02:38:40.932678 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83d58022_3d1c_4bab_ba2d_ca5a54d511db.slice/crio-bfb3f148d41731c880b1f2531d98a8d7eeafe49b1859d5ba49534164a8c5d1ad WatchSource:0}: Error finding container bfb3f148d41731c880b1f2531d98a8d7eeafe49b1859d5ba49534164a8c5d1ad: Status 404 returned error can't find the container with id bfb3f148d41731c880b1f2531d98a8d7eeafe49b1859d5ba49534164a8c5d1ad Feb 24 02:38:41.593360 master-0 kubenswrapper[31411]: I0224 02:38:41.593218 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7jt69" event={"ID":"83d58022-3d1c-4bab-ba2d-ca5a54d511db","Type":"ContainerStarted","Data":"67f54de3f30c2b2affe27ec8d04a452901c3a9e1ea584b7774eac9304f416f7c"} Feb 24 02:38:41.594225 master-0 kubenswrapper[31411]: I0224 02:38:41.594150 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7jt69" event={"ID":"83d58022-3d1c-4bab-ba2d-ca5a54d511db","Type":"ContainerStarted","Data":"bfb3f148d41731c880b1f2531d98a8d7eeafe49b1859d5ba49534164a8c5d1ad"} Feb 24 02:38:41.598922 master-0 kubenswrapper[31411]: I0224 02:38:41.598894 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" event={"ID":"d432fd91-0f08-43b3-8201-65f8d4e7efe8","Type":"ContainerStarted","Data":"8ce04f2df867785b0371b6428fe78a5e62680d50bb459d1c42d88b284e3d4a4a"} Feb 24 02:38:41.599709 master-0 kubenswrapper[31411]: I0224 02:38:41.599683 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" Feb 24 02:38:41.638360 master-0 kubenswrapper[31411]: I0224 02:38:41.637956 31411 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7jt69" podStartSLOduration=2.637934248 podStartE2EDuration="2.637934248s" podCreationTimestamp="2026-02-24 02:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:38:41.636064806 +0000 UTC m=+1064.853262672" watchObservedRunningTime="2026-02-24 02:38:41.637934248 +0000 UTC m=+1064.855132084" Feb 24 02:38:41.706497 master-0 kubenswrapper[31411]: I0224 02:38:41.706402 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" podStartSLOduration=3.706377527 podStartE2EDuration="3.706377527s" podCreationTimestamp="2026-02-24 02:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:38:41.68484836 +0000 UTC m=+1064.902046206" watchObservedRunningTime="2026-02-24 02:38:41.706377527 +0000 UTC m=+1064.923575373" Feb 24 02:38:42.651871 master-0 kubenswrapper[31411]: I0224 02:38:42.648824 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:42.672529 master-0 kubenswrapper[31411]: I0224 02:38:42.672457 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 02:38:43.670673 master-0 kubenswrapper[31411]: I0224 02:38:43.669780 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"43c3681e-f1f6-4953-8b9c-fc9b08618f5f","Type":"ContainerStarted","Data":"39af9503df23261c39f03f809ae380c68ac3e1b7857e4f4ca37ea5a60c937cd3"} Feb 24 02:38:43.673129 master-0 kubenswrapper[31411]: I0224 02:38:43.673082 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"29d4983d-bc42-45b7-ae25-f7c24cdd8512","Type":"ContainerStarted","Data":"cd04292271f4cb1dff16081824ad4dfaf6996ff0309a6865903853f2b34aa592"} Feb 24 02:38:43.675249 master-0 kubenswrapper[31411]: I0224 02:38:43.675184 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d483bb01-7ee4-4f34-a40b-15bf34e365bc","Type":"ContainerStarted","Data":"f64f93b42c13cdafa357676e1c873e80d5e365fb1c1331a5c97d6989a6621540"} Feb 24 02:38:43.677754 master-0 kubenswrapper[31411]: I0224 02:38:43.677675 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f22b0f7d-7993-44dd-a35a-d2481099ae64","Type":"ContainerStarted","Data":"69707733ec9e5fa74055743f47f3681472495d56d54a84abc91f63f23da9cfee"} Feb 24 02:38:44.698466 master-0 kubenswrapper[31411]: I0224 02:38:44.698331 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d483bb01-7ee4-4f34-a40b-15bf34e365bc","Type":"ContainerStarted","Data":"3482d706d5780273a958ae3b61197a17264c7f391eea34f3f06019a34b09554f"} Feb 24 02:38:44.699246 master-0 kubenswrapper[31411]: I0224 02:38:44.698416 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerName="nova-metadata-log" containerID="cri-o://f64f93b42c13cdafa357676e1c873e80d5e365fb1c1331a5c97d6989a6621540" gracePeriod=30 Feb 24 02:38:44.699246 master-0 kubenswrapper[31411]: I0224 02:38:44.698648 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerName="nova-metadata-metadata" containerID="cri-o://3482d706d5780273a958ae3b61197a17264c7f391eea34f3f06019a34b09554f" gracePeriod=30 Feb 24 02:38:44.699246 master-0 kubenswrapper[31411]: I0224 02:38:44.698713 31411 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-cell1-novncproxy-0" podUID="f22b0f7d-7993-44dd-a35a-d2481099ae64" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://69707733ec9e5fa74055743f47f3681472495d56d54a84abc91f63f23da9cfee" gracePeriod=30 Feb 24 02:38:45.716918 master-0 kubenswrapper[31411]: I0224 02:38:45.716847 31411 generic.go:334] "Generic (PLEG): container finished" podID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerID="3482d706d5780273a958ae3b61197a17264c7f391eea34f3f06019a34b09554f" exitCode=0 Feb 24 02:38:45.716918 master-0 kubenswrapper[31411]: I0224 02:38:45.716900 31411 generic.go:334] "Generic (PLEG): container finished" podID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerID="f64f93b42c13cdafa357676e1c873e80d5e365fb1c1331a5c97d6989a6621540" exitCode=143 Feb 24 02:38:45.717845 master-0 kubenswrapper[31411]: I0224 02:38:45.716898 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d483bb01-7ee4-4f34-a40b-15bf34e365bc","Type":"ContainerDied","Data":"3482d706d5780273a958ae3b61197a17264c7f391eea34f3f06019a34b09554f"} Feb 24 02:38:45.717845 master-0 kubenswrapper[31411]: I0224 02:38:45.716980 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d483bb01-7ee4-4f34-a40b-15bf34e365bc","Type":"ContainerDied","Data":"f64f93b42c13cdafa357676e1c873e80d5e365fb1c1331a5c97d6989a6621540"} Feb 24 02:38:45.720406 master-0 kubenswrapper[31411]: I0224 02:38:45.720355 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29d4983d-bc42-45b7-ae25-f7c24cdd8512","Type":"ContainerStarted","Data":"cd4e16d849e2d2d47b143bc6c53da7b1c4101799b4668a0e8f0418503dd2eb11"} Feb 24 02:38:46.588200 master-0 kubenswrapper[31411]: I0224 02:38:46.587952 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.6905672880000004 podStartE2EDuration="8.587917318s" 
podCreationTimestamp="2026-02-24 02:38:38 +0000 UTC" firstStartedPulling="2026-02-24 02:38:40.1942711 +0000 UTC m=+1063.411468946" lastFinishedPulling="2026-02-24 02:38:43.09162113 +0000 UTC m=+1066.308818976" observedRunningTime="2026-02-24 02:38:45.97012551 +0000 UTC m=+1069.187323346" watchObservedRunningTime="2026-02-24 02:38:46.587917318 +0000 UTC m=+1069.805115194" Feb 24 02:38:46.594856 master-0 kubenswrapper[31411]: I0224 02:38:46.594766 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.381436607 podStartE2EDuration="8.59474668s" podCreationTimestamp="2026-02-24 02:38:38 +0000 UTC" firstStartedPulling="2026-02-24 02:38:39.878717509 +0000 UTC m=+1063.095915355" lastFinishedPulling="2026-02-24 02:38:43.092027582 +0000 UTC m=+1066.309225428" observedRunningTime="2026-02-24 02:38:46.568596773 +0000 UTC m=+1069.785794629" watchObservedRunningTime="2026-02-24 02:38:46.59474668 +0000 UTC m=+1069.811944566" Feb 24 02:38:46.726566 master-0 kubenswrapper[31411]: I0224 02:38:46.724644 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.405114204 podStartE2EDuration="8.724623s" podCreationTimestamp="2026-02-24 02:38:38 +0000 UTC" firstStartedPulling="2026-02-24 02:38:39.780782539 +0000 UTC m=+1062.997980385" lastFinishedPulling="2026-02-24 02:38:43.100291335 +0000 UTC m=+1066.317489181" observedRunningTime="2026-02-24 02:38:46.706203951 +0000 UTC m=+1069.923401927" watchObservedRunningTime="2026-02-24 02:38:46.724623 +0000 UTC m=+1069.941820846" Feb 24 02:38:46.845144 master-0 kubenswrapper[31411]: I0224 02:38:46.844493 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=6.415931035 podStartE2EDuration="9.844464427s" podCreationTimestamp="2026-02-24 02:38:37 +0000 UTC" firstStartedPulling="2026-02-24 02:38:39.109743206 +0000 UTC m=+1062.326941052" 
lastFinishedPulling="2026-02-24 02:38:42.538276598 +0000 UTC m=+1065.755474444" observedRunningTime="2026-02-24 02:38:46.831695337 +0000 UTC m=+1070.048893203" watchObservedRunningTime="2026-02-24 02:38:46.844464427 +0000 UTC m=+1070.061662303" Feb 24 02:38:47.773689 master-0 kubenswrapper[31411]: I0224 02:38:47.773607 31411 generic.go:334] "Generic (PLEG): container finished" podID="23b93a03-505a-41a6-94b6-9344778d91be" containerID="32da3892179cccca9e16d1e01d1b1762e3f078ab921801f1ce77728bb81e8507" exitCode=0 Feb 24 02:38:47.773689 master-0 kubenswrapper[31411]: I0224 02:38:47.773681 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mz9lj" event={"ID":"23b93a03-505a-41a6-94b6-9344778d91be","Type":"ContainerDied","Data":"32da3892179cccca9e16d1e01d1b1762e3f078ab921801f1ce77728bb81e8507"} Feb 24 02:38:48.357226 master-0 kubenswrapper[31411]: I0224 02:38:48.357166 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 24 02:38:48.357507 master-0 kubenswrapper[31411]: I0224 02:38:48.357322 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 24 02:38:48.403985 master-0 kubenswrapper[31411]: I0224 02:38:48.403917 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 24 02:38:48.841189 master-0 kubenswrapper[31411]: I0224 02:38:48.841118 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 24 02:38:49.080783 master-0 kubenswrapper[31411]: I0224 02:38:49.079332 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 02:38:49.080783 master-0 kubenswrapper[31411]: I0224 02:38:49.079381 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 02:38:49.109367 master-0 kubenswrapper[31411]: I0224 
02:38:49.109302 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" Feb 24 02:38:49.192806 master-0 kubenswrapper[31411]: I0224 02:38:49.192749 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 02:38:49.193127 master-0 kubenswrapper[31411]: I0224 02:38:49.193115 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 02:38:49.200674 master-0 kubenswrapper[31411]: I0224 02:38:49.200630 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:38:49.209978 master-0 kubenswrapper[31411]: I0224 02:38:49.209908 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cc6c67c77-h5cpc"] Feb 24 02:38:49.217329 master-0 kubenswrapper[31411]: I0224 02:38:49.210205 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" podUID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" containerName="dnsmasq-dns" containerID="cri-o://26258ca057eef9536895385b36d82a0b998133dc91eeb54a32a901e1263f87fd" gracePeriod=10 Feb 24 02:38:49.808176 master-0 kubenswrapper[31411]: I0224 02:38:49.808026 31411 generic.go:334] "Generic (PLEG): container finished" podID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" containerID="26258ca057eef9536895385b36d82a0b998133dc91eeb54a32a901e1263f87fd" exitCode=0 Feb 24 02:38:49.808410 master-0 kubenswrapper[31411]: I0224 02:38:49.808267 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" event={"ID":"5073b6d9-022e-431f-92a1-9e4dbb1a2707","Type":"ContainerDied","Data":"26258ca057eef9536895385b36d82a0b998133dc91eeb54a32a901e1263f87fd"} Feb 24 02:38:50.167995 master-0 kubenswrapper[31411]: I0224 02:38:50.161944 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.2:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 02:38:50.167995 master-0 kubenswrapper[31411]: I0224 02:38:50.162096 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.2:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 02:38:53.485398 master-0 kubenswrapper[31411]: I0224 02:38:53.485330 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:38:53.504920 master-0 kubenswrapper[31411]: I0224 02:38:53.504824 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mz9lj" Feb 24 02:38:53.611649 master-0 kubenswrapper[31411]: I0224 02:38:53.611491 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-config-data\") pod \"23b93a03-505a-41a6-94b6-9344778d91be\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " Feb 24 02:38:53.611649 master-0 kubenswrapper[31411]: I0224 02:38:53.611616 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx4bv\" (UniqueName: \"kubernetes.io/projected/d483bb01-7ee4-4f34-a40b-15bf34e365bc-kube-api-access-vx4bv\") pod \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " Feb 24 02:38:53.611649 master-0 kubenswrapper[31411]: I0224 02:38:53.611644 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-combined-ca-bundle\") pod \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " Feb 24 02:38:53.611989 master-0 kubenswrapper[31411]: I0224 02:38:53.611732 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-combined-ca-bundle\") pod \"23b93a03-505a-41a6-94b6-9344778d91be\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " Feb 24 02:38:53.612408 master-0 kubenswrapper[31411]: I0224 02:38:53.612378 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-scripts\") pod \"23b93a03-505a-41a6-94b6-9344778d91be\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " Feb 24 02:38:53.612513 master-0 kubenswrapper[31411]: I0224 02:38:53.612461 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d483bb01-7ee4-4f34-a40b-15bf34e365bc-logs\") pod \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " Feb 24 02:38:53.612513 master-0 kubenswrapper[31411]: I0224 02:38:53.612506 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hvnn\" (UniqueName: \"kubernetes.io/projected/23b93a03-505a-41a6-94b6-9344778d91be-kube-api-access-5hvnn\") pod \"23b93a03-505a-41a6-94b6-9344778d91be\" (UID: \"23b93a03-505a-41a6-94b6-9344778d91be\") " Feb 24 02:38:53.612736 master-0 kubenswrapper[31411]: I0224 02:38:53.612705 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-config-data\") pod \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\" (UID: \"d483bb01-7ee4-4f34-a40b-15bf34e365bc\") " Feb 24 
02:38:53.613329 master-0 kubenswrapper[31411]: I0224 02:38:53.613258 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d483bb01-7ee4-4f34-a40b-15bf34e365bc-logs" (OuterVolumeSpecName: "logs") pod "d483bb01-7ee4-4f34-a40b-15bf34e365bc" (UID: "d483bb01-7ee4-4f34-a40b-15bf34e365bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:38:53.617206 master-0 kubenswrapper[31411]: I0224 02:38:53.617034 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d483bb01-7ee4-4f34-a40b-15bf34e365bc-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:53.619132 master-0 kubenswrapper[31411]: I0224 02:38:53.619080 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-scripts" (OuterVolumeSpecName: "scripts") pod "23b93a03-505a-41a6-94b6-9344778d91be" (UID: "23b93a03-505a-41a6-94b6-9344778d91be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:53.624120 master-0 kubenswrapper[31411]: I0224 02:38:53.624054 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b93a03-505a-41a6-94b6-9344778d91be-kube-api-access-5hvnn" (OuterVolumeSpecName: "kube-api-access-5hvnn") pod "23b93a03-505a-41a6-94b6-9344778d91be" (UID: "23b93a03-505a-41a6-94b6-9344778d91be"). InnerVolumeSpecName "kube-api-access-5hvnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:38:53.633854 master-0 kubenswrapper[31411]: I0224 02:38:53.633784 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d483bb01-7ee4-4f34-a40b-15bf34e365bc-kube-api-access-vx4bv" (OuterVolumeSpecName: "kube-api-access-vx4bv") pod "d483bb01-7ee4-4f34-a40b-15bf34e365bc" (UID: "d483bb01-7ee4-4f34-a40b-15bf34e365bc"). 
InnerVolumeSpecName "kube-api-access-vx4bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:38:53.649459 master-0 kubenswrapper[31411]: I0224 02:38:53.649078 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23b93a03-505a-41a6-94b6-9344778d91be" (UID: "23b93a03-505a-41a6-94b6-9344778d91be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:53.659202 master-0 kubenswrapper[31411]: I0224 02:38:53.659119 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d483bb01-7ee4-4f34-a40b-15bf34e365bc" (UID: "d483bb01-7ee4-4f34-a40b-15bf34e365bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:53.663929 master-0 kubenswrapper[31411]: I0224 02:38:53.663895 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-config-data" (OuterVolumeSpecName: "config-data") pod "d483bb01-7ee4-4f34-a40b-15bf34e365bc" (UID: "d483bb01-7ee4-4f34-a40b-15bf34e365bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:53.670143 master-0 kubenswrapper[31411]: I0224 02:38:53.670072 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-config-data" (OuterVolumeSpecName: "config-data") pod "23b93a03-505a-41a6-94b6-9344778d91be" (UID: "23b93a03-505a-41a6-94b6-9344778d91be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:53.719645 master-0 kubenswrapper[31411]: I0224 02:38:53.719588 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:53.719645 master-0 kubenswrapper[31411]: I0224 02:38:53.719641 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:53.719808 master-0 kubenswrapper[31411]: I0224 02:38:53.719657 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx4bv\" (UniqueName: \"kubernetes.io/projected/d483bb01-7ee4-4f34-a40b-15bf34e365bc-kube-api-access-vx4bv\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:53.719808 master-0 kubenswrapper[31411]: I0224 02:38:53.719669 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d483bb01-7ee4-4f34-a40b-15bf34e365bc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:53.719808 master-0 kubenswrapper[31411]: I0224 02:38:53.719679 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:53.719808 master-0 kubenswrapper[31411]: I0224 02:38:53.719687 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b93a03-505a-41a6-94b6-9344778d91be-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:53.719808 master-0 kubenswrapper[31411]: I0224 02:38:53.719695 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hvnn\" (UniqueName: 
\"kubernetes.io/projected/23b93a03-505a-41a6-94b6-9344778d91be-kube-api-access-5hvnn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:53.814639 master-0 kubenswrapper[31411]: I0224 02:38:53.813474 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" Feb 24 02:38:53.867386 master-0 kubenswrapper[31411]: I0224 02:38:53.867217 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mz9lj" event={"ID":"23b93a03-505a-41a6-94b6-9344778d91be","Type":"ContainerDied","Data":"cc0989e58940568fba59b433759f7dc52b913e7f972c4cdcb1248af22d4ab46f"} Feb 24 02:38:53.867386 master-0 kubenswrapper[31411]: I0224 02:38:53.867339 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc0989e58940568fba59b433759f7dc52b913e7f972c4cdcb1248af22d4ab46f" Feb 24 02:38:53.867386 master-0 kubenswrapper[31411]: I0224 02:38:53.867235 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mz9lj" Feb 24 02:38:53.870700 master-0 kubenswrapper[31411]: I0224 02:38:53.870632 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" event={"ID":"5073b6d9-022e-431f-92a1-9e4dbb1a2707","Type":"ContainerDied","Data":"edb1b8f3743f7e4a8fab33ec572cb1003e53b0ee15dbbaa6b98c1bdbc275d304"} Feb 24 02:38:53.870767 master-0 kubenswrapper[31411]: I0224 02:38:53.870721 31411 scope.go:117] "RemoveContainer" containerID="26258ca057eef9536895385b36d82a0b998133dc91eeb54a32a901e1263f87fd" Feb 24 02:38:53.870824 master-0 kubenswrapper[31411]: I0224 02:38:53.870748 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" Feb 24 02:38:53.874290 master-0 kubenswrapper[31411]: I0224 02:38:53.874024 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d483bb01-7ee4-4f34-a40b-15bf34e365bc","Type":"ContainerDied","Data":"fdbc4d69dfe3321b4d2fc0de6ed9219440f3227203db73309c77b1dc103e5a5a"} Feb 24 02:38:53.874290 master-0 kubenswrapper[31411]: I0224 02:38:53.874071 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:38:53.875537 master-0 kubenswrapper[31411]: I0224 02:38:53.875500 31411 generic.go:334] "Generic (PLEG): container finished" podID="83d58022-3d1c-4bab-ba2d-ca5a54d511db" containerID="67f54de3f30c2b2affe27ec8d04a452901c3a9e1ea584b7774eac9304f416f7c" exitCode=0 Feb 24 02:38:53.875537 master-0 kubenswrapper[31411]: I0224 02:38:53.875533 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7jt69" event={"ID":"83d58022-3d1c-4bab-ba2d-ca5a54d511db","Type":"ContainerDied","Data":"67f54de3f30c2b2affe27ec8d04a452901c3a9e1ea584b7774eac9304f416f7c"} Feb 24 02:38:53.902272 master-0 kubenswrapper[31411]: I0224 02:38:53.901631 31411 scope.go:117] "RemoveContainer" containerID="62834fe40abab078b78c2b05f8d8521a2d8b172de94e3d02a4335dc840fb3857" Feb 24 02:38:53.929755 master-0 kubenswrapper[31411]: I0224 02:38:53.925829 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-sb\") pod \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " Feb 24 02:38:53.929755 master-0 kubenswrapper[31411]: I0224 02:38:53.926120 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-config\") pod 
\"5073b6d9-022e-431f-92a1-9e4dbb1a2707\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " Feb 24 02:38:53.929755 master-0 kubenswrapper[31411]: I0224 02:38:53.926209 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-svc\") pod \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " Feb 24 02:38:53.929755 master-0 kubenswrapper[31411]: I0224 02:38:53.926256 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx4db\" (UniqueName: \"kubernetes.io/projected/5073b6d9-022e-431f-92a1-9e4dbb1a2707-kube-api-access-qx4db\") pod \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " Feb 24 02:38:53.929755 master-0 kubenswrapper[31411]: I0224 02:38:53.926292 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-nb\") pod \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " Feb 24 02:38:53.929755 master-0 kubenswrapper[31411]: I0224 02:38:53.926337 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-swift-storage-0\") pod \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\" (UID: \"5073b6d9-022e-431f-92a1-9e4dbb1a2707\") " Feb 24 02:38:53.952109 master-0 kubenswrapper[31411]: I0224 02:38:53.952058 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5073b6d9-022e-431f-92a1-9e4dbb1a2707-kube-api-access-qx4db" (OuterVolumeSpecName: "kube-api-access-qx4db") pod "5073b6d9-022e-431f-92a1-9e4dbb1a2707" (UID: "5073b6d9-022e-431f-92a1-9e4dbb1a2707"). InnerVolumeSpecName "kube-api-access-qx4db". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:38:53.952317 master-0 kubenswrapper[31411]: I0224 02:38:53.952288 31411 scope.go:117] "RemoveContainer" containerID="3482d706d5780273a958ae3b61197a17264c7f391eea34f3f06019a34b09554f" Feb 24 02:38:53.991745 master-0 kubenswrapper[31411]: I0224 02:38:53.991655 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:54.006680 master-0 kubenswrapper[31411]: I0224 02:38:54.006535 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:54.009229 master-0 kubenswrapper[31411]: I0224 02:38:54.009159 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5073b6d9-022e-431f-92a1-9e4dbb1a2707" (UID: "5073b6d9-022e-431f-92a1-9e4dbb1a2707"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.024888 31411 scope.go:117] "RemoveContainer" containerID="f64f93b42c13cdafa357676e1c873e80d5e365fb1c1331a5c97d6989a6621540" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.025083 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: E0224 02:38:54.026189 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerName="nova-metadata-metadata" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.026211 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerName="nova-metadata-metadata" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: E0224 02:38:54.026236 31411 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" containerName="dnsmasq-dns" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.026243 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" containerName="dnsmasq-dns" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: E0224 02:38:54.026276 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerName="nova-metadata-log" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.026284 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerName="nova-metadata-log" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: E0224 02:38:54.026309 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" containerName="init" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.026315 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" containerName="init" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: E0224 02:38:54.026338 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b93a03-505a-41a6-94b6-9344778d91be" containerName="nova-manage" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.026344 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b93a03-505a-41a6-94b6-9344778d91be" containerName="nova-manage" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.026615 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerName="nova-metadata-log" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.026632 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" containerName="nova-metadata-metadata" Feb 24 02:38:54.027732 master-0 
kubenswrapper[31411]: I0224 02:38:54.026653 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" containerName="dnsmasq-dns" Feb 24 02:38:54.027732 master-0 kubenswrapper[31411]: I0224 02:38:54.026701 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b93a03-505a-41a6-94b6-9344778d91be" containerName="nova-manage" Feb 24 02:38:54.028355 master-0 kubenswrapper[31411]: I0224 02:38:54.028112 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:38:54.037096 master-0 kubenswrapper[31411]: I0224 02:38:54.037015 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 24 02:38:54.037361 master-0 kubenswrapper[31411]: I0224 02:38:54.037258 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 02:38:54.037998 master-0 kubenswrapper[31411]: I0224 02:38:54.037947 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5073b6d9-022e-431f-92a1-9e4dbb1a2707" (UID: "5073b6d9-022e-431f-92a1-9e4dbb1a2707"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:54.037998 master-0 kubenswrapper[31411]: I0224 02:38:54.037977 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-config" (OuterVolumeSpecName: "config") pod "5073b6d9-022e-431f-92a1-9e4dbb1a2707" (UID: "5073b6d9-022e-431f-92a1-9e4dbb1a2707"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:54.041015 master-0 kubenswrapper[31411]: I0224 02:38:54.040980 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:54.044400 master-0 kubenswrapper[31411]: I0224 02:38:54.044364 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:54.044400 master-0 kubenswrapper[31411]: I0224 02:38:54.044395 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx4db\" (UniqueName: \"kubernetes.io/projected/5073b6d9-022e-431f-92a1-9e4dbb1a2707-kube-api-access-qx4db\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:54.044494 master-0 kubenswrapper[31411]: I0224 02:38:54.044406 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:54.044494 master-0 kubenswrapper[31411]: I0224 02:38:54.044415 31411 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:54.068291 master-0 kubenswrapper[31411]: I0224 02:38:54.068127 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5073b6d9-022e-431f-92a1-9e4dbb1a2707" (UID: "5073b6d9-022e-431f-92a1-9e4dbb1a2707"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:54.079406 master-0 kubenswrapper[31411]: I0224 02:38:54.079285 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5073b6d9-022e-431f-92a1-9e4dbb1a2707" (UID: "5073b6d9-022e-431f-92a1-9e4dbb1a2707"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:38:54.152691 master-0 kubenswrapper[31411]: I0224 02:38:54.146394 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.152691 master-0 kubenswrapper[31411]: I0224 02:38:54.146510 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdvxl\" (UniqueName: \"kubernetes.io/projected/f376f0d6-c367-4d86-99ca-6e95487131bd-kube-api-access-vdvxl\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.152691 master-0 kubenswrapper[31411]: I0224 02:38:54.146623 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.152691 master-0 kubenswrapper[31411]: I0224 02:38:54.146673 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f376f0d6-c367-4d86-99ca-6e95487131bd-logs\") pod 
\"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.152691 master-0 kubenswrapper[31411]: I0224 02:38:54.146777 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-config-data\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.152691 master-0 kubenswrapper[31411]: I0224 02:38:54.146878 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:54.152691 master-0 kubenswrapper[31411]: I0224 02:38:54.146900 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5073b6d9-022e-431f-92a1-9e4dbb1a2707-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:54.237601 master-0 kubenswrapper[31411]: I0224 02:38:54.236564 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cc6c67c77-h5cpc"] Feb 24 02:38:54.249600 master-0 kubenswrapper[31411]: I0224 02:38:54.248795 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.249600 master-0 kubenswrapper[31411]: I0224 02:38:54.248869 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f376f0d6-c367-4d86-99ca-6e95487131bd-logs\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.249600 
master-0 kubenswrapper[31411]: I0224 02:38:54.248972 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-config-data\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.249600 master-0 kubenswrapper[31411]: I0224 02:38:54.249039 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.249600 master-0 kubenswrapper[31411]: I0224 02:38:54.249111 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdvxl\" (UniqueName: \"kubernetes.io/projected/f376f0d6-c367-4d86-99ca-6e95487131bd-kube-api-access-vdvxl\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.252873 master-0 kubenswrapper[31411]: I0224 02:38:54.251801 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f376f0d6-c367-4d86-99ca-6e95487131bd-logs\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.257594 master-0 kubenswrapper[31411]: I0224 02:38:54.254716 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.257594 master-0 kubenswrapper[31411]: I0224 02:38:54.256531 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.257594 master-0 kubenswrapper[31411]: I0224 02:38:54.256820 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-config-data\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.258649 master-0 kubenswrapper[31411]: I0224 02:38:54.258194 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cc6c67c77-h5cpc"] Feb 24 02:38:54.274024 master-0 kubenswrapper[31411]: I0224 02:38:54.273958 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdvxl\" (UniqueName: \"kubernetes.io/projected/f376f0d6-c367-4d86-99ca-6e95487131bd-kube-api-access-vdvxl\") pod \"nova-metadata-0\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " pod="openstack/nova-metadata-0" Feb 24 02:38:54.367699 master-0 kubenswrapper[31411]: I0224 02:38:54.367620 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:38:54.794175 master-0 kubenswrapper[31411]: I0224 02:38:54.792035 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:38:54.794175 master-0 kubenswrapper[31411]: I0224 02:38:54.792557 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-log" containerID="cri-o://cd04292271f4cb1dff16081824ad4dfaf6996ff0309a6865903853f2b34aa592" gracePeriod=30 Feb 24 02:38:54.794175 master-0 kubenswrapper[31411]: I0224 02:38:54.792797 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-api" containerID="cri-o://cd4e16d849e2d2d47b143bc6c53da7b1c4101799b4668a0e8f0418503dd2eb11" gracePeriod=30 Feb 24 02:38:54.808805 master-0 kubenswrapper[31411]: I0224 02:38:54.808734 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 02:38:54.809085 master-0 kubenswrapper[31411]: I0224 02:38:54.809040 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="43c3681e-f1f6-4953-8b9c-fc9b08618f5f" containerName="nova-scheduler-scheduler" containerID="cri-o://39af9503df23261c39f03f809ae380c68ac3e1b7857e4f4ca37ea5a60c937cd3" gracePeriod=30 Feb 24 02:38:54.820951 master-0 kubenswrapper[31411]: I0224 02:38:54.820871 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:54.918103 master-0 kubenswrapper[31411]: I0224 02:38:54.917600 31411 generic.go:334] "Generic (PLEG): container finished" podID="d289f4ce-9a2f-4d57-bf7b-414618c7c4e8" containerID="ceab83b81645ba25888f1449863cb97287e5ea0eae1279b2c30abfc6fb906b06" exitCode=0 Feb 24 02:38:54.918103 master-0 kubenswrapper[31411]: I0224 02:38:54.917683 31411 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerDied","Data":"ceab83b81645ba25888f1449863cb97287e5ea0eae1279b2c30abfc6fb906b06"} Feb 24 02:38:54.918855 master-0 kubenswrapper[31411]: W0224 02:38:54.918787 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf376f0d6_c367_4d86_99ca_6e95487131bd.slice/crio-4d151538736b41ce83bc48e429c3f76bfd9fa6fda389d1ce55760e0906a0a4a9 WatchSource:0}: Error finding container 4d151538736b41ce83bc48e429c3f76bfd9fa6fda389d1ce55760e0906a0a4a9: Status 404 returned error can't find the container with id 4d151538736b41ce83bc48e429c3f76bfd9fa6fda389d1ce55760e0906a0a4a9 Feb 24 02:38:54.941080 master-0 kubenswrapper[31411]: I0224 02:38:54.929663 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"74a07828-16f1-4b69-bfbd-a0e76519ce98","Type":"ContainerStarted","Data":"7024b5d09dfa8719ea318a6733d5c62afd4c225829b1a01aeb419d2d9f6bdf9d"} Feb 24 02:38:54.941080 master-0 kubenswrapper[31411]: I0224 02:38:54.930012 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 02:38:54.960004 master-0 kubenswrapper[31411]: I0224 02:38:54.959941 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:54.999585 master-0 kubenswrapper[31411]: I0224 02:38:54.997368 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 02:38:55.086584 master-0 kubenswrapper[31411]: I0224 02:38:55.086389 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=3.59694923 podStartE2EDuration="18.086361664s" podCreationTimestamp="2026-02-24 
02:38:37 +0000 UTC" firstStartedPulling="2026-02-24 02:38:39.057351387 +0000 UTC m=+1062.274549233" lastFinishedPulling="2026-02-24 02:38:53.546763811 +0000 UTC m=+1076.763961667" observedRunningTime="2026-02-24 02:38:55.015883848 +0000 UTC m=+1078.233081694" watchObservedRunningTime="2026-02-24 02:38:55.086361664 +0000 UTC m=+1078.303559510" Feb 24 02:38:55.113583 master-0 kubenswrapper[31411]: I0224 02:38:55.109789 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" path="/var/lib/kubelet/pods/5073b6d9-022e-431f-92a1-9e4dbb1a2707/volumes" Feb 24 02:38:55.113583 master-0 kubenswrapper[31411]: I0224 02:38:55.110539 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d483bb01-7ee4-4f34-a40b-15bf34e365bc" path="/var/lib/kubelet/pods/d483bb01-7ee4-4f34-a40b-15bf34e365bc/volumes" Feb 24 02:38:55.570754 master-0 kubenswrapper[31411]: I0224 02:38:55.570661 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:55.725219 master-0 kubenswrapper[31411]: I0224 02:38:55.725157 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-combined-ca-bundle\") pod \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " Feb 24 02:38:55.725451 master-0 kubenswrapper[31411]: I0224 02:38:55.725418 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-scripts\") pod \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " Feb 24 02:38:55.725893 master-0 kubenswrapper[31411]: I0224 02:38:55.725618 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dlxv\" (UniqueName: \"kubernetes.io/projected/83d58022-3d1c-4bab-ba2d-ca5a54d511db-kube-api-access-5dlxv\") pod \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " Feb 24 02:38:55.725994 master-0 kubenswrapper[31411]: I0224 02:38:55.725938 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-config-data\") pod \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\" (UID: \"83d58022-3d1c-4bab-ba2d-ca5a54d511db\") " Feb 24 02:38:55.728947 master-0 kubenswrapper[31411]: I0224 02:38:55.728878 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-scripts" (OuterVolumeSpecName: "scripts") pod "83d58022-3d1c-4bab-ba2d-ca5a54d511db" (UID: "83d58022-3d1c-4bab-ba2d-ca5a54d511db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:55.730645 master-0 kubenswrapper[31411]: I0224 02:38:55.730509 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d58022-3d1c-4bab-ba2d-ca5a54d511db-kube-api-access-5dlxv" (OuterVolumeSpecName: "kube-api-access-5dlxv") pod "83d58022-3d1c-4bab-ba2d-ca5a54d511db" (UID: "83d58022-3d1c-4bab-ba2d-ca5a54d511db"). InnerVolumeSpecName "kube-api-access-5dlxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:38:55.777285 master-0 kubenswrapper[31411]: I0224 02:38:55.777213 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-config-data" (OuterVolumeSpecName: "config-data") pod "83d58022-3d1c-4bab-ba2d-ca5a54d511db" (UID: "83d58022-3d1c-4bab-ba2d-ca5a54d511db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:55.786459 master-0 kubenswrapper[31411]: I0224 02:38:55.786388 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83d58022-3d1c-4bab-ba2d-ca5a54d511db" (UID: "83d58022-3d1c-4bab-ba2d-ca5a54d511db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:55.838732 master-0 kubenswrapper[31411]: I0224 02:38:55.830672 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:55.838732 master-0 kubenswrapper[31411]: I0224 02:38:55.830727 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dlxv\" (UniqueName: \"kubernetes.io/projected/83d58022-3d1c-4bab-ba2d-ca5a54d511db-kube-api-access-5dlxv\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:55.838732 master-0 kubenswrapper[31411]: I0224 02:38:55.830742 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:55.838732 master-0 kubenswrapper[31411]: I0224 02:38:55.830754 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d58022-3d1c-4bab-ba2d-ca5a54d511db-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:55.959154 master-0 kubenswrapper[31411]: I0224 02:38:55.959087 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerStarted","Data":"4faf5872e806eb554f9bb5feeb26c48bff6ce51a82d031eb82d09d5584205913"} Feb 24 02:38:55.965490 master-0 kubenswrapper[31411]: I0224 02:38:55.965435 31411 generic.go:334] "Generic (PLEG): container finished" podID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerID="cd04292271f4cb1dff16081824ad4dfaf6996ff0309a6865903853f2b34aa592" exitCode=143 Feb 24 02:38:55.965490 master-0 kubenswrapper[31411]: I0224 02:38:55.965494 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"29d4983d-bc42-45b7-ae25-f7c24cdd8512","Type":"ContainerDied","Data":"cd04292271f4cb1dff16081824ad4dfaf6996ff0309a6865903853f2b34aa592"} Feb 24 02:38:55.992884 master-0 kubenswrapper[31411]: I0224 02:38:55.992834 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7jt69" Feb 24 02:38:55.993121 master-0 kubenswrapper[31411]: I0224 02:38:55.992909 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7jt69" event={"ID":"83d58022-3d1c-4bab-ba2d-ca5a54d511db","Type":"ContainerDied","Data":"bfb3f148d41731c880b1f2531d98a8d7eeafe49b1859d5ba49534164a8c5d1ad"} Feb 24 02:38:55.993121 master-0 kubenswrapper[31411]: I0224 02:38:55.992969 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfb3f148d41731c880b1f2531d98a8d7eeafe49b1859d5ba49534164a8c5d1ad" Feb 24 02:38:56.000745 master-0 kubenswrapper[31411]: I0224 02:38:56.000710 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f376f0d6-c367-4d86-99ca-6e95487131bd" containerName="nova-metadata-log" containerID="cri-o://f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b" gracePeriod=30 Feb 24 02:38:56.000883 master-0 kubenswrapper[31411]: I0224 02:38:56.000794 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f376f0d6-c367-4d86-99ca-6e95487131bd","Type":"ContainerStarted","Data":"26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72"} Feb 24 02:38:56.000883 master-0 kubenswrapper[31411]: I0224 02:38:56.000813 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f376f0d6-c367-4d86-99ca-6e95487131bd","Type":"ContainerStarted","Data":"f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b"} Feb 24 02:38:56.000883 master-0 kubenswrapper[31411]: I0224 02:38:56.000825 
31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f376f0d6-c367-4d86-99ca-6e95487131bd","Type":"ContainerStarted","Data":"4d151538736b41ce83bc48e429c3f76bfd9fa6fda389d1ce55760e0906a0a4a9"} Feb 24 02:38:56.001278 master-0 kubenswrapper[31411]: I0224 02:38:56.001254 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f376f0d6-c367-4d86-99ca-6e95487131bd" containerName="nova-metadata-metadata" containerID="cri-o://26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72" gracePeriod=30 Feb 24 02:38:56.074491 master-0 kubenswrapper[31411]: I0224 02:38:56.074341 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.074319852 podStartE2EDuration="3.074319852s" podCreationTimestamp="2026-02-24 02:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:38:56.064294829 +0000 UTC m=+1079.281492685" watchObservedRunningTime="2026-02-24 02:38:56.074319852 +0000 UTC m=+1079.291517698" Feb 24 02:38:56.106484 master-0 kubenswrapper[31411]: I0224 02:38:56.106415 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 24 02:38:56.107172 master-0 kubenswrapper[31411]: E0224 02:38:56.107140 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d58022-3d1c-4bab-ba2d-ca5a54d511db" containerName="nova-cell1-conductor-db-sync" Feb 24 02:38:56.107172 master-0 kubenswrapper[31411]: I0224 02:38:56.107164 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d58022-3d1c-4bab-ba2d-ca5a54d511db" containerName="nova-cell1-conductor-db-sync" Feb 24 02:38:56.107499 master-0 kubenswrapper[31411]: I0224 02:38:56.107472 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d58022-3d1c-4bab-ba2d-ca5a54d511db" 
containerName="nova-cell1-conductor-db-sync" Feb 24 02:38:56.108473 master-0 kubenswrapper[31411]: I0224 02:38:56.108443 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.111917 master-0 kubenswrapper[31411]: I0224 02:38:56.111887 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 24 02:38:56.155128 master-0 kubenswrapper[31411]: I0224 02:38:56.154642 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 24 02:38:56.244929 master-0 kubenswrapper[31411]: I0224 02:38:56.244892 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908cf055-4162-49bd-93cb-4d0a9add9b11-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"908cf055-4162-49bd-93cb-4d0a9add9b11\") " pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.245088 master-0 kubenswrapper[31411]: I0224 02:38:56.245072 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vzxg\" (UniqueName: \"kubernetes.io/projected/908cf055-4162-49bd-93cb-4d0a9add9b11-kube-api-access-2vzxg\") pod \"nova-cell1-conductor-0\" (UID: \"908cf055-4162-49bd-93cb-4d0a9add9b11\") " pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.245246 master-0 kubenswrapper[31411]: I0224 02:38:56.245230 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908cf055-4162-49bd-93cb-4d0a9add9b11-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"908cf055-4162-49bd-93cb-4d0a9add9b11\") " pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.347794 master-0 kubenswrapper[31411]: I0224 02:38:56.347730 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/908cf055-4162-49bd-93cb-4d0a9add9b11-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"908cf055-4162-49bd-93cb-4d0a9add9b11\") " pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.347913 master-0 kubenswrapper[31411]: I0224 02:38:56.347797 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vzxg\" (UniqueName: \"kubernetes.io/projected/908cf055-4162-49bd-93cb-4d0a9add9b11-kube-api-access-2vzxg\") pod \"nova-cell1-conductor-0\" (UID: \"908cf055-4162-49bd-93cb-4d0a9add9b11\") " pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.347913 master-0 kubenswrapper[31411]: I0224 02:38:56.347859 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908cf055-4162-49bd-93cb-4d0a9add9b11-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"908cf055-4162-49bd-93cb-4d0a9add9b11\") " pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.351703 master-0 kubenswrapper[31411]: I0224 02:38:56.351650 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908cf055-4162-49bd-93cb-4d0a9add9b11-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"908cf055-4162-49bd-93cb-4d0a9add9b11\") " pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.352030 master-0 kubenswrapper[31411]: I0224 02:38:56.352006 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908cf055-4162-49bd-93cb-4d0a9add9b11-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"908cf055-4162-49bd-93cb-4d0a9add9b11\") " pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.382350 master-0 kubenswrapper[31411]: I0224 02:38:56.382308 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vzxg\" (UniqueName: 
\"kubernetes.io/projected/908cf055-4162-49bd-93cb-4d0a9add9b11-kube-api-access-2vzxg\") pod \"nova-cell1-conductor-0\" (UID: \"908cf055-4162-49bd-93cb-4d0a9add9b11\") " pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.430125 master-0 kubenswrapper[31411]: I0224 02:38:56.430059 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 24 02:38:56.982745 master-0 kubenswrapper[31411]: I0224 02:38:56.982689 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:38:57.022098 master-0 kubenswrapper[31411]: I0224 02:38:57.022025 31411 generic.go:334] "Generic (PLEG): container finished" podID="f376f0d6-c367-4d86-99ca-6e95487131bd" containerID="26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72" exitCode=0 Feb 24 02:38:57.022098 master-0 kubenswrapper[31411]: I0224 02:38:57.022081 31411 generic.go:334] "Generic (PLEG): container finished" podID="f376f0d6-c367-4d86-99ca-6e95487131bd" containerID="f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b" exitCode=143 Feb 24 02:38:57.022530 master-0 kubenswrapper[31411]: I0224 02:38:57.022121 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:38:57.022530 master-0 kubenswrapper[31411]: I0224 02:38:57.022253 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f376f0d6-c367-4d86-99ca-6e95487131bd","Type":"ContainerDied","Data":"26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72"} Feb 24 02:38:57.022530 master-0 kubenswrapper[31411]: I0224 02:38:57.022300 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f376f0d6-c367-4d86-99ca-6e95487131bd","Type":"ContainerDied","Data":"f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b"} Feb 24 02:38:57.022530 master-0 kubenswrapper[31411]: I0224 02:38:57.022313 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f376f0d6-c367-4d86-99ca-6e95487131bd","Type":"ContainerDied","Data":"4d151538736b41ce83bc48e429c3f76bfd9fa6fda389d1ce55760e0906a0a4a9"} Feb 24 02:38:57.022530 master-0 kubenswrapper[31411]: I0224 02:38:57.022345 31411 scope.go:117] "RemoveContainer" containerID="26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72" Feb 24 02:38:57.058985 master-0 kubenswrapper[31411]: I0224 02:38:57.054970 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerStarted","Data":"12bf47fdc9b56487a8d10fd3c20cf78bafa1deb6537237f573b16849885ab23c"} Feb 24 02:38:57.058985 master-0 kubenswrapper[31411]: I0224 02:38:57.055051 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d289f4ce-9a2f-4d57-bf7b-414618c7c4e8","Type":"ContainerStarted","Data":"69a810964ef5021d0f6ed6ea0bffd052eb6d96d6ccd3a7ba1b0558e416805133"} Feb 24 02:38:57.058985 master-0 kubenswrapper[31411]: I0224 02:38:57.055101 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ironic-conductor-0" Feb 24 02:38:57.058985 master-0 kubenswrapper[31411]: I0224 02:38:57.055152 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 24 02:38:57.082008 master-0 kubenswrapper[31411]: I0224 02:38:57.079116 31411 scope.go:117] "RemoveContainer" containerID="f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b" Feb 24 02:38:57.083647 master-0 kubenswrapper[31411]: I0224 02:38:57.083116 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-nova-metadata-tls-certs\") pod \"f376f0d6-c367-4d86-99ca-6e95487131bd\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " Feb 24 02:38:57.083647 master-0 kubenswrapper[31411]: I0224 02:38:57.083331 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-config-data\") pod \"f376f0d6-c367-4d86-99ca-6e95487131bd\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " Feb 24 02:38:57.083647 master-0 kubenswrapper[31411]: I0224 02:38:57.083530 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdvxl\" (UniqueName: \"kubernetes.io/projected/f376f0d6-c367-4d86-99ca-6e95487131bd-kube-api-access-vdvxl\") pod \"f376f0d6-c367-4d86-99ca-6e95487131bd\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " Feb 24 02:38:57.083647 master-0 kubenswrapper[31411]: I0224 02:38:57.083610 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-combined-ca-bundle\") pod \"f376f0d6-c367-4d86-99ca-6e95487131bd\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " Feb 24 02:38:57.084241 master-0 kubenswrapper[31411]: I0224 02:38:57.083667 31411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f376f0d6-c367-4d86-99ca-6e95487131bd-logs\") pod \"f376f0d6-c367-4d86-99ca-6e95487131bd\" (UID: \"f376f0d6-c367-4d86-99ca-6e95487131bd\") " Feb 24 02:38:57.084858 master-0 kubenswrapper[31411]: I0224 02:38:57.084564 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f376f0d6-c367-4d86-99ca-6e95487131bd-logs" (OuterVolumeSpecName: "logs") pod "f376f0d6-c367-4d86-99ca-6e95487131bd" (UID: "f376f0d6-c367-4d86-99ca-6e95487131bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:38:57.122625 master-0 kubenswrapper[31411]: I0224 02:38:57.122044 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f376f0d6-c367-4d86-99ca-6e95487131bd-kube-api-access-vdvxl" (OuterVolumeSpecName: "kube-api-access-vdvxl") pod "f376f0d6-c367-4d86-99ca-6e95487131bd" (UID: "f376f0d6-c367-4d86-99ca-6e95487131bd"). InnerVolumeSpecName "kube-api-access-vdvxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:38:57.123749 master-0 kubenswrapper[31411]: I0224 02:38:57.123705 31411 scope.go:117] "RemoveContainer" containerID="26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72" Feb 24 02:38:57.125094 master-0 kubenswrapper[31411]: E0224 02:38:57.125048 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72\": container with ID starting with 26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72 not found: ID does not exist" containerID="26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72" Feb 24 02:38:57.125150 master-0 kubenswrapper[31411]: I0224 02:38:57.125100 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72"} err="failed to get container status \"26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72\": rpc error: code = NotFound desc = could not find container \"26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72\": container with ID starting with 26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72 not found: ID does not exist" Feb 24 02:38:57.125199 master-0 kubenswrapper[31411]: I0224 02:38:57.125151 31411 scope.go:117] "RemoveContainer" containerID="f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b" Feb 24 02:38:57.125465 master-0 kubenswrapper[31411]: E0224 02:38:57.125427 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b\": container with ID starting with f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b not found: ID does not exist" 
containerID="f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b" Feb 24 02:38:57.125465 master-0 kubenswrapper[31411]: I0224 02:38:57.125458 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b"} err="failed to get container status \"f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b\": rpc error: code = NotFound desc = could not find container \"f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b\": container with ID starting with f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b not found: ID does not exist" Feb 24 02:38:57.125546 master-0 kubenswrapper[31411]: I0224 02:38:57.125471 31411 scope.go:117] "RemoveContainer" containerID="26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72" Feb 24 02:38:57.125925 master-0 kubenswrapper[31411]: I0224 02:38:57.125870 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72"} err="failed to get container status \"26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72\": rpc error: code = NotFound desc = could not find container \"26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72\": container with ID starting with 26b8c3b30bf6a253d55cc645b77067d93caa1cd7ad70f431ba92e12cfa372d72 not found: ID does not exist" Feb 24 02:38:57.126198 master-0 kubenswrapper[31411]: I0224 02:38:57.125924 31411 scope.go:117] "RemoveContainer" containerID="f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b" Feb 24 02:38:57.126198 master-0 kubenswrapper[31411]: I0224 02:38:57.126046 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 24 02:38:57.126371 master-0 kubenswrapper[31411]: I0224 02:38:57.126341 31411 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b"} err="failed to get container status \"f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b\": rpc error: code = NotFound desc = could not find container \"f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b\": container with ID starting with f7505510464e66ea1b510507a55bf0b38e80216c1dc53ef2ecce3da21f516f7b not found: ID does not exist" Feb 24 02:38:57.132160 master-0 kubenswrapper[31411]: I0224 02:38:57.132100 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-config-data" (OuterVolumeSpecName: "config-data") pod "f376f0d6-c367-4d86-99ca-6e95487131bd" (UID: "f376f0d6-c367-4d86-99ca-6e95487131bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:57.134959 master-0 kubenswrapper[31411]: I0224 02:38:57.134877 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=65.663410862 podStartE2EDuration="1m42.134855805s" podCreationTimestamp="2026-02-24 02:37:15 +0000 UTC" firstStartedPulling="2026-02-24 02:37:25.203806112 +0000 UTC m=+988.421003948" lastFinishedPulling="2026-02-24 02:38:01.675251005 +0000 UTC m=+1024.892448891" observedRunningTime="2026-02-24 02:38:57.095422604 +0000 UTC m=+1080.312620480" watchObservedRunningTime="2026-02-24 02:38:57.134855805 +0000 UTC m=+1080.352053651" Feb 24 02:38:57.135231 master-0 kubenswrapper[31411]: W0224 02:38:57.135185 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod908cf055_4162_49bd_93cb_4d0a9add9b11.slice/crio-9bc91446bc1631fddb36a04ba713ec8c17c223b501796fae6a7f03839e749540 WatchSource:0}: Error finding container 9bc91446bc1631fddb36a04ba713ec8c17c223b501796fae6a7f03839e749540: Status 404 returned error can't 
find the container with id 9bc91446bc1631fddb36a04ba713ec8c17c223b501796fae6a7f03839e749540 Feb 24 02:38:57.135367 master-0 kubenswrapper[31411]: I0224 02:38:57.135334 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f376f0d6-c367-4d86-99ca-6e95487131bd" (UID: "f376f0d6-c367-4d86-99ca-6e95487131bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:57.169208 master-0 kubenswrapper[31411]: I0224 02:38:57.169119 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f376f0d6-c367-4d86-99ca-6e95487131bd" (UID: "f376f0d6-c367-4d86-99ca-6e95487131bd"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:38:57.188332 master-0 kubenswrapper[31411]: I0224 02:38:57.188280 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f376f0d6-c367-4d86-99ca-6e95487131bd-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:57.188332 master-0 kubenswrapper[31411]: I0224 02:38:57.188327 31411 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:57.188434 master-0 kubenswrapper[31411]: I0224 02:38:57.188350 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:57.188434 master-0 kubenswrapper[31411]: I0224 02:38:57.188372 31411 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-vdvxl\" (UniqueName: \"kubernetes.io/projected/f376f0d6-c367-4d86-99ca-6e95487131bd-kube-api-access-vdvxl\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:57.188434 master-0 kubenswrapper[31411]: I0224 02:38:57.188390 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f376f0d6-c367-4d86-99ca-6e95487131bd-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:38:57.294076 master-0 kubenswrapper[31411]: I0224 02:38:57.293936 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Feb 24 02:38:57.387042 master-0 kubenswrapper[31411]: I0224 02:38:57.386920 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:57.422827 master-0 kubenswrapper[31411]: I0224 02:38:57.422738 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:57.482616 master-0 kubenswrapper[31411]: I0224 02:38:57.481825 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:57.483136 master-0 kubenswrapper[31411]: E0224 02:38:57.483116 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f376f0d6-c367-4d86-99ca-6e95487131bd" containerName="nova-metadata-log" Feb 24 02:38:57.483192 master-0 kubenswrapper[31411]: I0224 02:38:57.483137 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="f376f0d6-c367-4d86-99ca-6e95487131bd" containerName="nova-metadata-log" Feb 24 02:38:57.483272 master-0 kubenswrapper[31411]: E0224 02:38:57.483250 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f376f0d6-c367-4d86-99ca-6e95487131bd" containerName="nova-metadata-metadata" Feb 24 02:38:57.483272 master-0 kubenswrapper[31411]: I0224 02:38:57.483264 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="f376f0d6-c367-4d86-99ca-6e95487131bd" 
containerName="nova-metadata-metadata" Feb 24 02:38:57.483858 master-0 kubenswrapper[31411]: I0224 02:38:57.483735 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="f376f0d6-c367-4d86-99ca-6e95487131bd" containerName="nova-metadata-metadata" Feb 24 02:38:57.483858 master-0 kubenswrapper[31411]: I0224 02:38:57.483787 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="f376f0d6-c367-4d86-99ca-6e95487131bd" containerName="nova-metadata-log" Feb 24 02:38:57.490164 master-0 kubenswrapper[31411]: I0224 02:38:57.486303 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:38:57.499034 master-0 kubenswrapper[31411]: I0224 02:38:57.493441 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 24 02:38:57.499034 master-0 kubenswrapper[31411]: I0224 02:38:57.493798 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 02:38:57.542038 master-0 kubenswrapper[31411]: I0224 02:38:57.541968 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:38:57.603365 master-0 kubenswrapper[31411]: I0224 02:38:57.603178 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-config-data\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0" Feb 24 02:38:57.603365 master-0 kubenswrapper[31411]: I0224 02:38:57.603307 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc588\" (UniqueName: \"kubernetes.io/projected/d1063d6c-8667-43df-8967-4a22ce919924-kube-api-access-sc588\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " 
pod="openstack/nova-metadata-0"
Feb 24 02:38:57.603365 master-0 kubenswrapper[31411]: I0224 02:38:57.603339 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.603756 master-0 kubenswrapper[31411]: I0224 02:38:57.603402 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1063d6c-8667-43df-8967-4a22ce919924-logs\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.603756 master-0 kubenswrapper[31411]: I0224 02:38:57.603445 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.705567 master-0 kubenswrapper[31411]: I0224 02:38:57.705486 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc588\" (UniqueName: \"kubernetes.io/projected/d1063d6c-8667-43df-8967-4a22ce919924-kube-api-access-sc588\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.705567 master-0 kubenswrapper[31411]: I0224 02:38:57.705556 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.706015 master-0 kubenswrapper[31411]: I0224 02:38:57.705836 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1063d6c-8667-43df-8967-4a22ce919924-logs\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.706105 master-0 kubenswrapper[31411]: I0224 02:38:57.706050 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.706774 master-0 kubenswrapper[31411]: I0224 02:38:57.706414 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-config-data\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.707293 master-0 kubenswrapper[31411]: I0224 02:38:57.707218 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1063d6c-8667-43df-8967-4a22ce919924-logs\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.710719 master-0 kubenswrapper[31411]: I0224 02:38:57.710676 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-config-data\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.711622 master-0 kubenswrapper[31411]: I0224 02:38:57.711557 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.717219 master-0 kubenswrapper[31411]: I0224 02:38:57.717165 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.742744 master-0 kubenswrapper[31411]: I0224 02:38:57.742684 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc588\" (UniqueName: \"kubernetes.io/projected/d1063d6c-8667-43df-8967-4a22ce919924-kube-api-access-sc588\") pod \"nova-metadata-0\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " pod="openstack/nova-metadata-0"
Feb 24 02:38:57.827715 master-0 kubenswrapper[31411]: I0224 02:38:57.827654 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 24 02:38:58.085589 master-0 kubenswrapper[31411]: I0224 02:38:58.085488 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"908cf055-4162-49bd-93cb-4d0a9add9b11","Type":"ContainerStarted","Data":"dbeb8984bb43f9478f73b39ca11f09b75707ad134e785fb23219f4587366db7c"}
Feb 24 02:38:58.085589 master-0 kubenswrapper[31411]: I0224 02:38:58.085586 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"908cf055-4162-49bd-93cb-4d0a9add9b11","Type":"ContainerStarted","Data":"9bc91446bc1631fddb36a04ba713ec8c17c223b501796fae6a7f03839e749540"}
Feb 24 02:38:58.086332 master-0 kubenswrapper[31411]: I0224 02:38:58.085701 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 24 02:38:58.090192 master-0 kubenswrapper[31411]: I0224 02:38:58.090150 31411 generic.go:334] "Generic (PLEG): container finished" podID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerID="cd4e16d849e2d2d47b143bc6c53da7b1c4101799b4668a0e8f0418503dd2eb11" exitCode=0
Feb 24 02:38:58.090306 master-0 kubenswrapper[31411]: I0224 02:38:58.090217 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29d4983d-bc42-45b7-ae25-f7c24cdd8512","Type":"ContainerDied","Data":"cd4e16d849e2d2d47b143bc6c53da7b1c4101799b4668a0e8f0418503dd2eb11"}
Feb 24 02:38:58.115373 master-0 kubenswrapper[31411]: I0224 02:38:58.115248 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.11522767 podStartE2EDuration="2.11522767s" podCreationTimestamp="2026-02-24 02:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:38:58.10174293 +0000 UTC m=+1081.318940776" watchObservedRunningTime="2026-02-24 02:38:58.11522767 +0000 UTC m=+1081.332425516"
Feb 24 02:38:58.359561 master-0 kubenswrapper[31411]: E0224 02:38:58.359464 31411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39af9503df23261c39f03f809ae380c68ac3e1b7857e4f4ca37ea5a60c937cd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 24 02:38:58.384966 master-0 kubenswrapper[31411]: E0224 02:38:58.377263 31411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39af9503df23261c39f03f809ae380c68ac3e1b7857e4f4ca37ea5a60c937cd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 24 02:38:58.399964 master-0 kubenswrapper[31411]: E0224 02:38:58.399229 31411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39af9503df23261c39f03f809ae380c68ac3e1b7857e4f4ca37ea5a60c937cd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 24 02:38:58.399964 master-0 kubenswrapper[31411]: E0224 02:38:58.399321 31411 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="43c3681e-f1f6-4953-8b9c-fc9b08618f5f" containerName="nova-scheduler-scheduler"
Feb 24 02:38:58.443608 master-0 kubenswrapper[31411]: I0224 02:38:58.437970 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 02:38:58.533921 master-0 kubenswrapper[31411]: I0224 02:38:58.533714 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cc6c67c77-h5cpc" podUID="5073b6d9-022e-431f-92a1-9e4dbb1a2707" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.249:5353: i/o timeout"
Feb 24 02:38:58.821051 master-0 kubenswrapper[31411]: I0224 02:38:58.821003 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 02:38:58.953762 master-0 kubenswrapper[31411]: I0224 02:38:58.953693 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4m86\" (UniqueName: \"kubernetes.io/projected/29d4983d-bc42-45b7-ae25-f7c24cdd8512-kube-api-access-b4m86\") pod \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") "
Feb 24 02:38:58.953875 master-0 kubenswrapper[31411]: I0224 02:38:58.953785 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-combined-ca-bundle\") pod \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") "
Feb 24 02:38:58.954372 master-0 kubenswrapper[31411]: I0224 02:38:58.954346 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29d4983d-bc42-45b7-ae25-f7c24cdd8512-logs\") pod \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") "
Feb 24 02:38:58.954506 master-0 kubenswrapper[31411]: I0224 02:38:58.954484 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-config-data\") pod \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\" (UID: \"29d4983d-bc42-45b7-ae25-f7c24cdd8512\") "
Feb 24 02:38:58.955039 master-0 kubenswrapper[31411]: I0224 02:38:58.954976 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d4983d-bc42-45b7-ae25-f7c24cdd8512-logs" (OuterVolumeSpecName: "logs") pod "29d4983d-bc42-45b7-ae25-f7c24cdd8512" (UID: "29d4983d-bc42-45b7-ae25-f7c24cdd8512"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 02:38:58.956492 master-0 kubenswrapper[31411]: I0224 02:38:58.956448 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29d4983d-bc42-45b7-ae25-f7c24cdd8512-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 02:38:58.963617 master-0 kubenswrapper[31411]: I0224 02:38:58.963532 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d4983d-bc42-45b7-ae25-f7c24cdd8512-kube-api-access-b4m86" (OuterVolumeSpecName: "kube-api-access-b4m86") pod "29d4983d-bc42-45b7-ae25-f7c24cdd8512" (UID: "29d4983d-bc42-45b7-ae25-f7c24cdd8512"). InnerVolumeSpecName "kube-api-access-b4m86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:38:59.005056 master-0 kubenswrapper[31411]: I0224 02:38:59.004984 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-config-data" (OuterVolumeSpecName: "config-data") pod "29d4983d-bc42-45b7-ae25-f7c24cdd8512" (UID: "29d4983d-bc42-45b7-ae25-f7c24cdd8512"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:38:59.007542 master-0 kubenswrapper[31411]: I0224 02:38:59.007504 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29d4983d-bc42-45b7-ae25-f7c24cdd8512" (UID: "29d4983d-bc42-45b7-ae25-f7c24cdd8512"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:38:59.027661 master-0 kubenswrapper[31411]: I0224 02:38:59.027609 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0"
Feb 24 02:38:59.059892 master-0 kubenswrapper[31411]: I0224 02:38:59.059817 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4m86\" (UniqueName: \"kubernetes.io/projected/29d4983d-bc42-45b7-ae25-f7c24cdd8512-kube-api-access-b4m86\") on node \"master-0\" DevicePath \"\""
Feb 24 02:38:59.059892 master-0 kubenswrapper[31411]: I0224 02:38:59.059869 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:38:59.059892 master-0 kubenswrapper[31411]: I0224 02:38:59.059881 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d4983d-bc42-45b7-ae25-f7c24cdd8512-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 02:38:59.118412 master-0 kubenswrapper[31411]: I0224 02:38:59.118299 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f376f0d6-c367-4d86-99ca-6e95487131bd" path="/var/lib/kubelet/pods/f376f0d6-c367-4d86-99ca-6e95487131bd/volumes"
Feb 24 02:38:59.121606 master-0 kubenswrapper[31411]: I0224 02:38:59.121455 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d1063d6c-8667-43df-8967-4a22ce919924","Type":"ContainerStarted","Data":"764a91a85ac0d521c5936462cdbe700036ddc082cfa739afbda0b9bc198e3d84"}
Feb 24 02:38:59.121606 master-0 kubenswrapper[31411]: I0224 02:38:59.121487 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 02:38:59.121606 master-0 kubenswrapper[31411]: I0224 02:38:59.121537 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d1063d6c-8667-43df-8967-4a22ce919924","Type":"ContainerStarted","Data":"de3f13fdad639eed662c0e91175e4959aed64be9827a2b1558c7b3885c52cafb"}
Feb 24 02:38:59.121829 master-0 kubenswrapper[31411]: I0224 02:38:59.121603 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29d4983d-bc42-45b7-ae25-f7c24cdd8512","Type":"ContainerDied","Data":"bc692cd43f3ec446b25a7926a6ac1601bbdbd534f995866ede2c582240262229"}
Feb 24 02:38:59.121829 master-0 kubenswrapper[31411]: I0224 02:38:59.121663 31411 scope.go:117] "RemoveContainer" containerID="cd4e16d849e2d2d47b143bc6c53da7b1c4101799b4668a0e8f0418503dd2eb11"
Feb 24 02:38:59.170558 master-0 kubenswrapper[31411]: I0224 02:38:59.170501 31411 scope.go:117] "RemoveContainer" containerID="cd04292271f4cb1dff16081824ad4dfaf6996ff0309a6865903853f2b34aa592"
Feb 24 02:38:59.188359 master-0 kubenswrapper[31411]: I0224 02:38:59.188030 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Feb 24 02:38:59.291585 master-0 kubenswrapper[31411]: I0224 02:38:59.291507 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 24 02:38:59.311350 master-0 kubenswrapper[31411]: I0224 02:38:59.311277 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 24 02:38:59.323698 master-0 kubenswrapper[31411]: I0224 02:38:59.323618 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 24 02:38:59.324264 master-0 kubenswrapper[31411]: E0224 02:38:59.324223 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-api"
Feb 24 02:38:59.324264 master-0 kubenswrapper[31411]: I0224 02:38:59.324246 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-api"
Feb 24 02:38:59.324413 master-0 kubenswrapper[31411]: E0224 02:38:59.324278 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-log"
Feb 24 02:38:59.324413 master-0 kubenswrapper[31411]: I0224 02:38:59.324287 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-log"
Feb 24 02:38:59.324727 master-0 kubenswrapper[31411]: I0224 02:38:59.324657 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-log"
Feb 24 02:38:59.324791 master-0 kubenswrapper[31411]: I0224 02:38:59.324731 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" containerName="nova-api-api"
Feb 24 02:38:59.326094 master-0 kubenswrapper[31411]: I0224 02:38:59.326066 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 02:38:59.335892 master-0 kubenswrapper[31411]: I0224 02:38:59.334717 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 24 02:38:59.368825 master-0 kubenswrapper[31411]: I0224 02:38:59.366725 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 24 02:38:59.368825 master-0 kubenswrapper[31411]: I0224 02:38:59.368114 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnbs7\" (UniqueName: \"kubernetes.io/projected/4d124862-0eea-40c8-a889-0db0c3b88b6f-kube-api-access-wnbs7\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.368825 master-0 kubenswrapper[31411]: I0224 02:38:59.368212 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-config-data\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.368825 master-0 kubenswrapper[31411]: I0224 02:38:59.368270 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d124862-0eea-40c8-a889-0db0c3b88b6f-logs\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.368825 master-0 kubenswrapper[31411]: I0224 02:38:59.368423 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.472588 master-0 kubenswrapper[31411]: I0224 02:38:59.472006 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.472588 master-0 kubenswrapper[31411]: I0224 02:38:59.472169 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnbs7\" (UniqueName: \"kubernetes.io/projected/4d124862-0eea-40c8-a889-0db0c3b88b6f-kube-api-access-wnbs7\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.472588 master-0 kubenswrapper[31411]: I0224 02:38:59.472263 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-config-data\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.472588 master-0 kubenswrapper[31411]: I0224 02:38:59.472325 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d124862-0eea-40c8-a889-0db0c3b88b6f-logs\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.472933 master-0 kubenswrapper[31411]: I0224 02:38:59.472902 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d124862-0eea-40c8-a889-0db0c3b88b6f-logs\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.475889 master-0 kubenswrapper[31411]: I0224 02:38:59.475823 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.478871 master-0 kubenswrapper[31411]: I0224 02:38:59.478823 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-config-data\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.489029 master-0 kubenswrapper[31411]: I0224 02:38:59.488984 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnbs7\" (UniqueName: \"kubernetes.io/projected/4d124862-0eea-40c8-a889-0db0c3b88b6f-kube-api-access-wnbs7\") pod \"nova-api-0\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " pod="openstack/nova-api-0"
Feb 24 02:38:59.651743 master-0 kubenswrapper[31411]: I0224 02:38:59.651535 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 02:39:00.148214 master-0 kubenswrapper[31411]: I0224 02:39:00.148145 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d1063d6c-8667-43df-8967-4a22ce919924","Type":"ContainerStarted","Data":"3fbdcfc81c110b39ef61e318226540831254e1314a24eae991738b4b04cab74f"}
Feb 24 02:39:00.165648 master-0 kubenswrapper[31411]: I0224 02:39:00.165511 31411 generic.go:334] "Generic (PLEG): container finished" podID="43c3681e-f1f6-4953-8b9c-fc9b08618f5f" containerID="39af9503df23261c39f03f809ae380c68ac3e1b7857e4f4ca37ea5a60c937cd3" exitCode=0
Feb 24 02:39:00.165978 master-0 kubenswrapper[31411]: I0224 02:39:00.165930 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"43c3681e-f1f6-4953-8b9c-fc9b08618f5f","Type":"ContainerDied","Data":"39af9503df23261c39f03f809ae380c68ac3e1b7857e4f4ca37ea5a60c937cd3"}
Feb 24 02:39:00.200096 master-0 kubenswrapper[31411]: I0224 02:39:00.197421 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.19739371 podStartE2EDuration="3.19739371s" podCreationTimestamp="2026-02-24 02:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:00.180911286 +0000 UTC m=+1083.398109152" watchObservedRunningTime="2026-02-24 02:39:00.19739371 +0000 UTC m=+1083.414591556"
Feb 24 02:39:00.233387 master-0 kubenswrapper[31411]: I0224 02:39:00.233311 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 24 02:39:00.710199 master-0 kubenswrapper[31411]: I0224 02:39:00.710108 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 24 02:39:00.828639 master-0 kubenswrapper[31411]: I0224 02:39:00.828539 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-combined-ca-bundle\") pod \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") "
Feb 24 02:39:00.828849 master-0 kubenswrapper[31411]: I0224 02:39:00.828813 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2gj8\" (UniqueName: \"kubernetes.io/projected/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-kube-api-access-s2gj8\") pod \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") "
Feb 24 02:39:00.829190 master-0 kubenswrapper[31411]: I0224 02:39:00.829157 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-config-data\") pod \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\" (UID: \"43c3681e-f1f6-4953-8b9c-fc9b08618f5f\") "
Feb 24 02:39:00.834093 master-0 kubenswrapper[31411]: I0224 02:39:00.833999 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-kube-api-access-s2gj8" (OuterVolumeSpecName: "kube-api-access-s2gj8") pod "43c3681e-f1f6-4953-8b9c-fc9b08618f5f" (UID: "43c3681e-f1f6-4953-8b9c-fc9b08618f5f"). InnerVolumeSpecName "kube-api-access-s2gj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:39:00.888286 master-0 kubenswrapper[31411]: I0224 02:39:00.888199 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-config-data" (OuterVolumeSpecName: "config-data") pod "43c3681e-f1f6-4953-8b9c-fc9b08618f5f" (UID: "43c3681e-f1f6-4953-8b9c-fc9b08618f5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:39:00.889404 master-0 kubenswrapper[31411]: I0224 02:39:00.889018 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43c3681e-f1f6-4953-8b9c-fc9b08618f5f" (UID: "43c3681e-f1f6-4953-8b9c-fc9b08618f5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:39:00.936687 master-0 kubenswrapper[31411]: I0224 02:39:00.933998 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 02:39:00.936687 master-0 kubenswrapper[31411]: I0224 02:39:00.934064 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2gj8\" (UniqueName: \"kubernetes.io/projected/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-kube-api-access-s2gj8\") on node \"master-0\" DevicePath \"\""
Feb 24 02:39:00.936687 master-0 kubenswrapper[31411]: I0224 02:39:00.934084 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c3681e-f1f6-4953-8b9c-fc9b08618f5f-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 02:39:01.110484 master-0 kubenswrapper[31411]: I0224 02:39:01.110267 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d4983d-bc42-45b7-ae25-f7c24cdd8512" path="/var/lib/kubelet/pods/29d4983d-bc42-45b7-ae25-f7c24cdd8512/volumes"
Feb 24 02:39:01.131785 master-0 kubenswrapper[31411]: E0224 02:39:01.131716 31411 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c3681e_f1f6_4953_8b9c_fc9b08618f5f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c3681e_f1f6_4953_8b9c_fc9b08618f5f.slice/crio-0d601bdf26c3543d6c0e9f08b39adf897f702a9a79bfde25b0f059ee15ef7c1c\": RecentStats: unable to find data in memory cache]"
Feb 24 02:39:01.181500 master-0 kubenswrapper[31411]: I0224 02:39:01.181299 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d124862-0eea-40c8-a889-0db0c3b88b6f","Type":"ContainerStarted","Data":"9601bf9afddb31a9f961f50649d7d7c25e528fec724949558f0c5bf3dab1a90d"}
Feb 24 02:39:01.182646 master-0 kubenswrapper[31411]: I0224 02:39:01.182615 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.182800 master-0 kubenswrapper[31411]: I0224 02:39:01.182406 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d124862-0eea-40c8-a889-0db0c3b88b6f","Type":"ContainerStarted","Data":"addbd283a212183cc42604ebcc1797b239e11d751ff94f8a3e317c33af750846"}
Feb 24 02:39:01.185648 master-0 kubenswrapper[31411]: I0224 02:39:01.182676 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d124862-0eea-40c8-a889-0db0c3b88b6f","Type":"ContainerStarted","Data":"0daa576ce562a516670a064433a0bee98177af149e34c3ea9ef236d7b479b1db"}
Feb 24 02:39:01.185648 master-0 kubenswrapper[31411]: I0224 02:39:01.184910 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"43c3681e-f1f6-4953-8b9c-fc9b08618f5f","Type":"ContainerDied","Data":"0d601bdf26c3543d6c0e9f08b39adf897f702a9a79bfde25b0f059ee15ef7c1c"}
Feb 24 02:39:01.185648 master-0 kubenswrapper[31411]: I0224 02:39:01.184947 31411 scope.go:117] "RemoveContainer" containerID="39af9503df23261c39f03f809ae380c68ac3e1b7857e4f4ca37ea5a60c937cd3"
Feb 24 02:39:01.188151 master-0 kubenswrapper[31411]: I0224 02:39:01.187796 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Feb 24 02:39:01.265391 master-0 kubenswrapper[31411]: I0224 02:39:01.264900 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.2648339379999998 podStartE2EDuration="2.264833938s" podCreationTimestamp="2026-02-24 02:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:01.203393817 +0000 UTC m=+1084.420591663" watchObservedRunningTime="2026-02-24 02:39:01.264833938 +0000 UTC m=+1084.482031794"
Feb 24 02:39:01.307993 master-0 kubenswrapper[31411]: I0224 02:39:01.307922 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 24 02:39:01.337670 master-0 kubenswrapper[31411]: I0224 02:39:01.336364 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 24 02:39:01.352888 master-0 kubenswrapper[31411]: I0224 02:39:01.352814 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 24 02:39:01.354337 master-0 kubenswrapper[31411]: E0224 02:39:01.353779 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c3681e-f1f6-4953-8b9c-fc9b08618f5f" containerName="nova-scheduler-scheduler"
Feb 24 02:39:01.354337 master-0 kubenswrapper[31411]: I0224 02:39:01.353808 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c3681e-f1f6-4953-8b9c-fc9b08618f5f" containerName="nova-scheduler-scheduler"
Feb 24 02:39:01.354337 master-0 kubenswrapper[31411]: I0224 02:39:01.354263 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c3681e-f1f6-4953-8b9c-fc9b08618f5f" containerName="nova-scheduler-scheduler"
Feb 24 02:39:01.355756 master-0 kubenswrapper[31411]: I0224 02:39:01.355718 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.362540 master-0 kubenswrapper[31411]: I0224 02:39:01.362408 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 24 02:39:01.395602 master-0 kubenswrapper[31411]: I0224 02:39:01.381391 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 24 02:39:01.452912 master-0 kubenswrapper[31411]: I0224 02:39:01.452864 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcb4h\" (UniqueName: \"kubernetes.io/projected/a0c89c95-1f73-4996-87ef-183a13c5891b-kube-api-access-kcb4h\") pod \"nova-scheduler-0\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.453205 master-0 kubenswrapper[31411]: I0224 02:39:01.453186 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.453305 master-0 kubenswrapper[31411]: I0224 02:39:01.453292 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-config-data\") pod \"nova-scheduler-0\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.556083 master-0 kubenswrapper[31411]: I0224 02:39:01.556012 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcb4h\" (UniqueName: \"kubernetes.io/projected/a0c89c95-1f73-4996-87ef-183a13c5891b-kube-api-access-kcb4h\") pod \"nova-scheduler-0\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.557101 master-0 kubenswrapper[31411]: I0224 02:39:01.557043 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.558225 master-0 kubenswrapper[31411]: I0224 02:39:01.558176 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-config-data\") pod \"nova-scheduler-0\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.564299 master-0 kubenswrapper[31411]: I0224 02:39:01.564224 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-config-data\") pod \"nova-scheduler-0\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.564466 master-0 kubenswrapper[31411]: I0224 02:39:01.564273 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.587867 master-0 kubenswrapper[31411]: I0224 02:39:01.587782 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcb4h\" (UniqueName: \"kubernetes.io/projected/a0c89c95-1f73-4996-87ef-183a13c5891b-kube-api-access-kcb4h\") pod \"nova-scheduler-0\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " pod="openstack/nova-scheduler-0"
Feb 24 02:39:01.718998 master-0 kubenswrapper[31411]: I0224 02:39:01.718929 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 24 02:39:02.346851 master-0 kubenswrapper[31411]: W0224 02:39:02.346750 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0c89c95_1f73_4996_87ef_183a13c5891b.slice/crio-a2d663333afe44b9dc109d3a8046b259f66dbb401a1198bbe7452059e7742f68 WatchSource:0}: Error finding container a2d663333afe44b9dc109d3a8046b259f66dbb401a1198bbe7452059e7742f68: Status 404 returned error can't find the container with id a2d663333afe44b9dc109d3a8046b259f66dbb401a1198bbe7452059e7742f68
Feb 24 02:39:02.352166 master-0 kubenswrapper[31411]: I0224 02:39:02.352084 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 24 02:39:02.828343 master-0 kubenswrapper[31411]: I0224 02:39:02.828241 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 24 02:39:02.829764 master-0 kubenswrapper[31411]: I0224 02:39:02.829712 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 24 02:39:03.119194 master-0 kubenswrapper[31411]: I0224 02:39:03.119058 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c3681e-f1f6-4953-8b9c-fc9b08618f5f" path="/var/lib/kubelet/pods/43c3681e-f1f6-4953-8b9c-fc9b08618f5f/volumes"
Feb 24 02:39:03.238643 master-0 kubenswrapper[31411]: I0224 02:39:03.238531 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0c89c95-1f73-4996-87ef-183a13c5891b","Type":"ContainerStarted","Data":"aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d"}
Feb 24 02:39:03.238643 master-0 kubenswrapper[31411]: I0224 02:39:03.238634 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0c89c95-1f73-4996-87ef-183a13c5891b","Type":"ContainerStarted","Data":"a2d663333afe44b9dc109d3a8046b259f66dbb401a1198bbe7452059e7742f68"}
Feb 24 02:39:03.288645 master-0 kubenswrapper[31411]: I0224 02:39:03.288072 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.288039637 podStartE2EDuration="2.288039637s" podCreationTimestamp="2026-02-24 02:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:03.270996997 +0000 UTC m=+1086.488194883" watchObservedRunningTime="2026-02-24 02:39:03.288039637 +0000 UTC m=+1086.505237523"
Feb 24 02:39:06.495401 master-0 kubenswrapper[31411]: I0224 02:39:06.495279 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 24 02:39:06.720098 master-0 kubenswrapper[31411]: I0224 02:39:06.719972 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 24 02:39:07.828813 master-0 kubenswrapper[31411]: I0224 02:39:07.828730 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 24 02:39:07.829558 master-0 kubenswrapper[31411]: I0224 02:39:07.828826 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 24 02:39:08.843900 master-0 kubenswrapper[31411]: I0224 02:39:08.843825 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.8:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:39:08.844679 master-0 kubenswrapper[31411]: I0224 02:39:08.843877 31411 prober.go:107] "Probe failed" probeType="Startup"
pod="openstack/nova-metadata-0" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:39:09.652341 master-0 kubenswrapper[31411]: I0224 02:39:09.652269 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 02:39:09.652341 master-0 kubenswrapper[31411]: I0224 02:39:09.652336 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 02:39:10.735143 master-0 kubenswrapper[31411]: I0224 02:39:10.734882 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.9:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 02:39:10.736850 master-0 kubenswrapper[31411]: I0224 02:39:10.735349 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.9:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 02:39:11.720316 master-0 kubenswrapper[31411]: I0224 02:39:11.720223 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 24 02:39:11.762821 master-0 kubenswrapper[31411]: I0224 02:39:11.762744 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 24 02:39:11.879119 master-0 kubenswrapper[31411]: I0224 02:39:11.879029 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 24 02:39:14.895620 master-0 kubenswrapper[31411]: I0224 02:39:14.895526 31411 
generic.go:334] "Generic (PLEG): container finished" podID="f22b0f7d-7993-44dd-a35a-d2481099ae64" containerID="69707733ec9e5fa74055743f47f3681472495d56d54a84abc91f63f23da9cfee" exitCode=137 Feb 24 02:39:14.895620 master-0 kubenswrapper[31411]: I0224 02:39:14.895620 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f22b0f7d-7993-44dd-a35a-d2481099ae64","Type":"ContainerDied","Data":"69707733ec9e5fa74055743f47f3681472495d56d54a84abc91f63f23da9cfee"} Feb 24 02:39:15.459693 master-0 kubenswrapper[31411]: I0224 02:39:15.459633 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:15.509853 master-0 kubenswrapper[31411]: I0224 02:39:15.509788 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd9sr\" (UniqueName: \"kubernetes.io/projected/f22b0f7d-7993-44dd-a35a-d2481099ae64-kube-api-access-pd9sr\") pod \"f22b0f7d-7993-44dd-a35a-d2481099ae64\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " Feb 24 02:39:15.510194 master-0 kubenswrapper[31411]: I0224 02:39:15.510158 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-combined-ca-bundle\") pod \"f22b0f7d-7993-44dd-a35a-d2481099ae64\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " Feb 24 02:39:15.510438 master-0 kubenswrapper[31411]: I0224 02:39:15.510410 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-config-data\") pod \"f22b0f7d-7993-44dd-a35a-d2481099ae64\" (UID: \"f22b0f7d-7993-44dd-a35a-d2481099ae64\") " Feb 24 02:39:15.516059 master-0 kubenswrapper[31411]: I0224 02:39:15.516022 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f22b0f7d-7993-44dd-a35a-d2481099ae64-kube-api-access-pd9sr" (OuterVolumeSpecName: "kube-api-access-pd9sr") pod "f22b0f7d-7993-44dd-a35a-d2481099ae64" (UID: "f22b0f7d-7993-44dd-a35a-d2481099ae64"). InnerVolumeSpecName "kube-api-access-pd9sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:39:15.590499 master-0 kubenswrapper[31411]: I0224 02:39:15.586032 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f22b0f7d-7993-44dd-a35a-d2481099ae64" (UID: "f22b0f7d-7993-44dd-a35a-d2481099ae64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:15.598115 master-0 kubenswrapper[31411]: I0224 02:39:15.597736 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-config-data" (OuterVolumeSpecName: "config-data") pod "f22b0f7d-7993-44dd-a35a-d2481099ae64" (UID: "f22b0f7d-7993-44dd-a35a-d2481099ae64"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:15.619734 master-0 kubenswrapper[31411]: I0224 02:39:15.619207 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd9sr\" (UniqueName: \"kubernetes.io/projected/f22b0f7d-7993-44dd-a35a-d2481099ae64-kube-api-access-pd9sr\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:15.619734 master-0 kubenswrapper[31411]: I0224 02:39:15.619248 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:15.619734 master-0 kubenswrapper[31411]: I0224 02:39:15.619262 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22b0f7d-7993-44dd-a35a-d2481099ae64-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:15.916494 master-0 kubenswrapper[31411]: I0224 02:39:15.916425 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f22b0f7d-7993-44dd-a35a-d2481099ae64","Type":"ContainerDied","Data":"80654eec1f9cb0a32f902ab48489a095d24845d0555177bc314d6830c9725a87"} Feb 24 02:39:15.924872 master-0 kubenswrapper[31411]: I0224 02:39:15.916505 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:15.924872 master-0 kubenswrapper[31411]: I0224 02:39:15.916669 31411 scope.go:117] "RemoveContainer" containerID="69707733ec9e5fa74055743f47f3681472495d56d54a84abc91f63f23da9cfee" Feb 24 02:39:16.032201 master-0 kubenswrapper[31411]: I0224 02:39:16.032132 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 02:39:16.050561 master-0 kubenswrapper[31411]: I0224 02:39:16.050465 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 02:39:16.143657 master-0 kubenswrapper[31411]: I0224 02:39:16.143466 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 02:39:16.144456 master-0 kubenswrapper[31411]: E0224 02:39:16.144180 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22b0f7d-7993-44dd-a35a-d2481099ae64" containerName="nova-cell1-novncproxy-novncproxy" Feb 24 02:39:16.144456 master-0 kubenswrapper[31411]: I0224 02:39:16.144205 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22b0f7d-7993-44dd-a35a-d2481099ae64" containerName="nova-cell1-novncproxy-novncproxy" Feb 24 02:39:16.144884 master-0 kubenswrapper[31411]: I0224 02:39:16.144849 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22b0f7d-7993-44dd-a35a-d2481099ae64" containerName="nova-cell1-novncproxy-novncproxy" Feb 24 02:39:16.145815 master-0 kubenswrapper[31411]: I0224 02:39:16.145784 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.150078 master-0 kubenswrapper[31411]: I0224 02:39:16.150044 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 24 02:39:16.150670 master-0 kubenswrapper[31411]: I0224 02:39:16.150637 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 24 02:39:16.150811 master-0 kubenswrapper[31411]: I0224 02:39:16.150643 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 24 02:39:16.236598 master-0 kubenswrapper[31411]: I0224 02:39:16.236540 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.236870 master-0 kubenswrapper[31411]: I0224 02:39:16.236628 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.236870 master-0 kubenswrapper[31411]: I0224 02:39:16.236698 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqr8s\" (UniqueName: \"kubernetes.io/projected/47607b00-65e8-4d2e-90de-95f633ed0872-kube-api-access-jqr8s\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.236870 master-0 kubenswrapper[31411]: I0224 02:39:16.236767 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.236870 master-0 kubenswrapper[31411]: I0224 02:39:16.236822 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.348903 master-0 kubenswrapper[31411]: I0224 02:39:16.348725 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.349302 master-0 kubenswrapper[31411]: I0224 02:39:16.349256 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.350163 master-0 kubenswrapper[31411]: I0224 02:39:16.350106 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.350813 
master-0 kubenswrapper[31411]: I0224 02:39:16.350769 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.351024 master-0 kubenswrapper[31411]: I0224 02:39:16.351003 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqr8s\" (UniqueName: \"kubernetes.io/projected/47607b00-65e8-4d2e-90de-95f633ed0872-kube-api-access-jqr8s\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.354514 master-0 kubenswrapper[31411]: I0224 02:39:16.354387 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.354846 master-0 kubenswrapper[31411]: I0224 02:39:16.354758 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.355669 master-0 kubenswrapper[31411]: I0224 02:39:16.355566 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 02:39:16.356607 master-0 kubenswrapper[31411]: I0224 02:39:16.356191 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.357678 master-0 kubenswrapper[31411]: I0224 02:39:16.357407 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47607b00-65e8-4d2e-90de-95f633ed0872-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.507694 master-0 kubenswrapper[31411]: I0224 02:39:16.507637 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqr8s\" (UniqueName: \"kubernetes.io/projected/47607b00-65e8-4d2e-90de-95f633ed0872-kube-api-access-jqr8s\") pod \"nova-cell1-novncproxy-0\" (UID: \"47607b00-65e8-4d2e-90de-95f633ed0872\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:16.771933 master-0 kubenswrapper[31411]: I0224 02:39:16.771864 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:17.134226 master-0 kubenswrapper[31411]: I0224 02:39:17.133973 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f22b0f7d-7993-44dd-a35a-d2481099ae64" path="/var/lib/kubelet/pods/f22b0f7d-7993-44dd-a35a-d2481099ae64/volumes" Feb 24 02:39:17.376533 master-0 kubenswrapper[31411]: I0224 02:39:17.363045 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 02:39:17.378741 master-0 kubenswrapper[31411]: W0224 02:39:17.378653 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47607b00_65e8_4d2e_90de_95f633ed0872.slice/crio-6483d54cdf2b040797a8fcef967e98411ca3c61daafd997c99fe498962dbe1f9 WatchSource:0}: Error finding container 6483d54cdf2b040797a8fcef967e98411ca3c61daafd997c99fe498962dbe1f9: Status 404 returned error can't find the container with id 6483d54cdf2b040797a8fcef967e98411ca3c61daafd997c99fe498962dbe1f9 Feb 24 02:39:17.834409 master-0 kubenswrapper[31411]: I0224 02:39:17.834332 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 02:39:17.835951 master-0 kubenswrapper[31411]: I0224 02:39:17.835903 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 02:39:17.852775 master-0 kubenswrapper[31411]: I0224 02:39:17.852336 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 24 02:39:17.954555 master-0 kubenswrapper[31411]: I0224 02:39:17.954489 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"47607b00-65e8-4d2e-90de-95f633ed0872","Type":"ContainerStarted","Data":"4e369899f9a5bc70c60934f364b5ddd7d552167d7a940c703ba79495089a8223"} Feb 24 02:39:17.954885 master-0 kubenswrapper[31411]: I0224 
02:39:17.954864 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"47607b00-65e8-4d2e-90de-95f633ed0872","Type":"ContainerStarted","Data":"6483d54cdf2b040797a8fcef967e98411ca3c61daafd997c99fe498962dbe1f9"} Feb 24 02:39:17.992981 master-0 kubenswrapper[31411]: I0224 02:39:17.992954 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 24 02:39:18.220449 master-0 kubenswrapper[31411]: I0224 02:39:18.218532 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.218495112 podStartE2EDuration="2.218495112s" podCreationTimestamp="2026-02-24 02:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:18.193059116 +0000 UTC m=+1101.410256972" watchObservedRunningTime="2026-02-24 02:39:18.218495112 +0000 UTC m=+1101.435692968" Feb 24 02:39:19.658165 master-0 kubenswrapper[31411]: I0224 02:39:19.657392 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 02:39:19.659398 master-0 kubenswrapper[31411]: I0224 02:39:19.658757 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 02:39:19.668430 master-0 kubenswrapper[31411]: I0224 02:39:19.668365 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 02:39:19.669966 master-0 kubenswrapper[31411]: I0224 02:39:19.669898 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 24 02:39:20.006264 master-0 kubenswrapper[31411]: I0224 02:39:20.006211 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 02:39:20.011976 master-0 kubenswrapper[31411]: I0224 02:39:20.011919 31411 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 24 02:39:20.350011 master-0 kubenswrapper[31411]: I0224 02:39:20.349875 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7586c46c57-vgvpz"] Feb 24 02:39:20.353299 master-0 kubenswrapper[31411]: I0224 02:39:20.352412 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.367363 master-0 kubenswrapper[31411]: I0224 02:39:20.367281 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7586c46c57-vgvpz"] Feb 24 02:39:20.544397 master-0 kubenswrapper[31411]: I0224 02:39:20.543348 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-ovsdbserver-sb\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.544397 master-0 kubenswrapper[31411]: I0224 02:39:20.543453 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-ovsdbserver-nb\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.544397 master-0 kubenswrapper[31411]: I0224 02:39:20.543481 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-dns-svc\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.544397 master-0 kubenswrapper[31411]: I0224 02:39:20.543538 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-config\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.544397 master-0 kubenswrapper[31411]: I0224 02:39:20.543617 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq897\" (UniqueName: \"kubernetes.io/projected/c86d48d3-ae36-493d-8e45-02729b2681f1-kube-api-access-wq897\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.548620 master-0 kubenswrapper[31411]: I0224 02:39:20.545108 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-dns-swift-storage-0\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.649704 master-0 kubenswrapper[31411]: I0224 02:39:20.647242 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-ovsdbserver-sb\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.649704 master-0 kubenswrapper[31411]: I0224 02:39:20.647361 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-ovsdbserver-nb\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" 
Feb 24 02:39:20.649704 master-0 kubenswrapper[31411]: I0224 02:39:20.647387 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-dns-svc\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.649704 master-0 kubenswrapper[31411]: I0224 02:39:20.647446 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-config\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.649704 master-0 kubenswrapper[31411]: I0224 02:39:20.647505 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq897\" (UniqueName: \"kubernetes.io/projected/c86d48d3-ae36-493d-8e45-02729b2681f1-kube-api-access-wq897\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.649704 master-0 kubenswrapper[31411]: I0224 02:39:20.647534 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-dns-swift-storage-0\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.652969 master-0 kubenswrapper[31411]: I0224 02:39:20.652904 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-ovsdbserver-nb\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " 
pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.655674 master-0 kubenswrapper[31411]: I0224 02:39:20.653264 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-dns-swift-storage-0\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.655674 master-0 kubenswrapper[31411]: I0224 02:39:20.653539 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-ovsdbserver-sb\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.655674 master-0 kubenswrapper[31411]: I0224 02:39:20.654162 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-dns-svc\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.659608 master-0 kubenswrapper[31411]: I0224 02:39:20.658684 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c86d48d3-ae36-493d-8e45-02729b2681f1-config\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.690698 master-0 kubenswrapper[31411]: I0224 02:39:20.682698 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq897\" (UniqueName: \"kubernetes.io/projected/c86d48d3-ae36-493d-8e45-02729b2681f1-kube-api-access-wq897\") pod \"dnsmasq-dns-7586c46c57-vgvpz\" (UID: \"c86d48d3-ae36-493d-8e45-02729b2681f1\") " 
pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:20.690698 master-0 kubenswrapper[31411]: I0224 02:39:20.688295 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:21.228679 master-0 kubenswrapper[31411]: I0224 02:39:21.227313 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7586c46c57-vgvpz"] Feb 24 02:39:21.773028 master-0 kubenswrapper[31411]: I0224 02:39:21.772855 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:22.059360 master-0 kubenswrapper[31411]: I0224 02:39:22.056249 31411 generic.go:334] "Generic (PLEG): container finished" podID="c86d48d3-ae36-493d-8e45-02729b2681f1" containerID="ec783c92f3f9f405ff4948f73c9ac614f5fb87f6b8dfe788d028cb51bab66f5c" exitCode=0 Feb 24 02:39:22.059887 master-0 kubenswrapper[31411]: I0224 02:39:22.059822 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" event={"ID":"c86d48d3-ae36-493d-8e45-02729b2681f1","Type":"ContainerDied","Data":"ec783c92f3f9f405ff4948f73c9ac614f5fb87f6b8dfe788d028cb51bab66f5c"} Feb 24 02:39:22.065509 master-0 kubenswrapper[31411]: I0224 02:39:22.065394 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" event={"ID":"c86d48d3-ae36-493d-8e45-02729b2681f1","Type":"ContainerStarted","Data":"a7357889e66da4ca9048506582c1fe203d2ad8746aa1e2c15fc8d90203286709"} Feb 24 02:39:23.075579 master-0 kubenswrapper[31411]: I0224 02:39:23.075493 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" event={"ID":"c86d48d3-ae36-493d-8e45-02729b2681f1","Type":"ContainerStarted","Data":"f0a3627778e6ea815c5f8a64ee38b792c071e79da1884930df0b074912196ec4"} Feb 24 02:39:23.076865 master-0 kubenswrapper[31411]: I0224 02:39:23.076831 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:23.274730 master-0 kubenswrapper[31411]: I0224 02:39:23.274564 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" podStartSLOduration=3.27453493 podStartE2EDuration="3.27453493s" podCreationTimestamp="2026-02-24 02:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:23.261712409 +0000 UTC m=+1106.478910255" watchObservedRunningTime="2026-02-24 02:39:23.27453493 +0000 UTC m=+1106.491732776" Feb 24 02:39:23.485767 master-0 kubenswrapper[31411]: I0224 02:39:23.485657 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:23.486239 master-0 kubenswrapper[31411]: I0224 02:39:23.485984 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-log" containerID="cri-o://addbd283a212183cc42604ebcc1797b239e11d751ff94f8a3e317c33af750846" gracePeriod=30 Feb 24 02:39:23.486239 master-0 kubenswrapper[31411]: I0224 02:39:23.486178 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-api" containerID="cri-o://9601bf9afddb31a9f961f50649d7d7c25e528fec724949558f0c5bf3dab1a90d" gracePeriod=30 Feb 24 02:39:24.092821 master-0 kubenswrapper[31411]: I0224 02:39:24.092682 31411 generic.go:334] "Generic (PLEG): container finished" podID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerID="addbd283a212183cc42604ebcc1797b239e11d751ff94f8a3e317c33af750846" exitCode=143 Feb 24 02:39:24.093899 master-0 kubenswrapper[31411]: I0224 02:39:24.092816 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4d124862-0eea-40c8-a889-0db0c3b88b6f","Type":"ContainerDied","Data":"addbd283a212183cc42604ebcc1797b239e11d751ff94f8a3e317c33af750846"} Feb 24 02:39:26.772889 master-0 kubenswrapper[31411]: I0224 02:39:26.772823 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:26.802682 master-0 kubenswrapper[31411]: I0224 02:39:26.802601 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:27.140086 master-0 kubenswrapper[31411]: I0224 02:39:27.140027 31411 generic.go:334] "Generic (PLEG): container finished" podID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerID="9601bf9afddb31a9f961f50649d7d7c25e528fec724949558f0c5bf3dab1a90d" exitCode=0 Feb 24 02:39:27.140340 master-0 kubenswrapper[31411]: I0224 02:39:27.140136 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d124862-0eea-40c8-a889-0db0c3b88b6f","Type":"ContainerDied","Data":"9601bf9afddb31a9f961f50649d7d7c25e528fec724949558f0c5bf3dab1a90d"} Feb 24 02:39:27.166886 master-0 kubenswrapper[31411]: I0224 02:39:27.166789 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 24 02:39:27.327556 master-0 kubenswrapper[31411]: I0224 02:39:27.327502 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 02:39:27.416145 master-0 kubenswrapper[31411]: I0224 02:39:27.414712 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnbs7\" (UniqueName: \"kubernetes.io/projected/4d124862-0eea-40c8-a889-0db0c3b88b6f-kube-api-access-wnbs7\") pod \"4d124862-0eea-40c8-a889-0db0c3b88b6f\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " Feb 24 02:39:27.416145 master-0 kubenswrapper[31411]: I0224 02:39:27.414823 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-config-data\") pod \"4d124862-0eea-40c8-a889-0db0c3b88b6f\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " Feb 24 02:39:27.420095 master-0 kubenswrapper[31411]: I0224 02:39:27.420015 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d124862-0eea-40c8-a889-0db0c3b88b6f-kube-api-access-wnbs7" (OuterVolumeSpecName: "kube-api-access-wnbs7") pod "4d124862-0eea-40c8-a889-0db0c3b88b6f" (UID: "4d124862-0eea-40c8-a889-0db0c3b88b6f"). InnerVolumeSpecName "kube-api-access-wnbs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:39:27.450933 master-0 kubenswrapper[31411]: I0224 02:39:27.448535 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-rxn8v"] Feb 24 02:39:27.450933 master-0 kubenswrapper[31411]: E0224 02:39:27.449717 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-api" Feb 24 02:39:27.450933 master-0 kubenswrapper[31411]: I0224 02:39:27.449743 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-api" Feb 24 02:39:27.450933 master-0 kubenswrapper[31411]: E0224 02:39:27.449797 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-log" Feb 24 02:39:27.450933 master-0 kubenswrapper[31411]: I0224 02:39:27.449807 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-log" Feb 24 02:39:27.450933 master-0 kubenswrapper[31411]: I0224 02:39:27.450226 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-log" Feb 24 02:39:27.450933 master-0 kubenswrapper[31411]: I0224 02:39:27.450297 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" containerName="nova-api-api" Feb 24 02:39:27.451836 master-0 kubenswrapper[31411]: I0224 02:39:27.451809 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.454262 master-0 kubenswrapper[31411]: I0224 02:39:27.453769 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 24 02:39:27.456542 master-0 kubenswrapper[31411]: I0224 02:39:27.455629 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 24 02:39:27.472877 master-0 kubenswrapper[31411]: I0224 02:39:27.472818 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rxn8v"] Feb 24 02:39:27.499534 master-0 kubenswrapper[31411]: I0224 02:39:27.499473 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-stqn6"] Feb 24 02:39:27.506321 master-0 kubenswrapper[31411]: I0224 02:39:27.506285 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.515244 master-0 kubenswrapper[31411]: I0224 02:39:27.515171 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-stqn6"] Feb 24 02:39:27.521602 master-0 kubenswrapper[31411]: I0224 02:39:27.521543 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-combined-ca-bundle\") pod \"4d124862-0eea-40c8-a889-0db0c3b88b6f\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " Feb 24 02:39:27.521714 master-0 kubenswrapper[31411]: I0224 02:39:27.521684 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d124862-0eea-40c8-a889-0db0c3b88b6f-logs\") pod \"4d124862-0eea-40c8-a889-0db0c3b88b6f\" (UID: \"4d124862-0eea-40c8-a889-0db0c3b88b6f\") " Feb 24 02:39:27.523437 master-0 kubenswrapper[31411]: I0224 02:39:27.523400 31411 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-scripts\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.523558 master-0 kubenswrapper[31411]: I0224 02:39:27.523486 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.523558 master-0 kubenswrapper[31411]: I0224 02:39:27.523549 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-config-data\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.523746 master-0 kubenswrapper[31411]: I0224 02:39:27.523597 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-config-data\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.523746 master-0 kubenswrapper[31411]: I0224 02:39:27.523651 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnvfb\" (UniqueName: \"kubernetes.io/projected/f1c49571-1aa8-4a21-9d31-76039b6413d8-kube-api-access-xnvfb\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " 
pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.523746 master-0 kubenswrapper[31411]: I0224 02:39:27.523697 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jnn\" (UniqueName: \"kubernetes.io/projected/f284bd2b-440f-48e1-b995-9da2d0519a0b-kube-api-access-v2jnn\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.523746 master-0 kubenswrapper[31411]: I0224 02:39:27.523719 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-scripts\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.523883 master-0 kubenswrapper[31411]: I0224 02:39:27.523756 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-combined-ca-bundle\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.523883 master-0 kubenswrapper[31411]: I0224 02:39:27.523869 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnbs7\" (UniqueName: \"kubernetes.io/projected/4d124862-0eea-40c8-a889-0db0c3b88b6f-kube-api-access-wnbs7\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:27.525271 master-0 kubenswrapper[31411]: I0224 02:39:27.524628 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-config-data" (OuterVolumeSpecName: "config-data") pod "4d124862-0eea-40c8-a889-0db0c3b88b6f" (UID: "4d124862-0eea-40c8-a889-0db0c3b88b6f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:27.526045 master-0 kubenswrapper[31411]: I0224 02:39:27.525679 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d124862-0eea-40c8-a889-0db0c3b88b6f-logs" (OuterVolumeSpecName: "logs") pod "4d124862-0eea-40c8-a889-0db0c3b88b6f" (UID: "4d124862-0eea-40c8-a889-0db0c3b88b6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:39:27.572555 master-0 kubenswrapper[31411]: I0224 02:39:27.572475 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d124862-0eea-40c8-a889-0db0c3b88b6f" (UID: "4d124862-0eea-40c8-a889-0db0c3b88b6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:27.627438 master-0 kubenswrapper[31411]: I0224 02:39:27.627370 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnvfb\" (UniqueName: \"kubernetes.io/projected/f1c49571-1aa8-4a21-9d31-76039b6413d8-kube-api-access-xnvfb\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.627711 master-0 kubenswrapper[31411]: I0224 02:39:27.627547 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jnn\" (UniqueName: \"kubernetes.io/projected/f284bd2b-440f-48e1-b995-9da2d0519a0b-kube-api-access-v2jnn\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.627711 master-0 kubenswrapper[31411]: I0224 02:39:27.627614 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-scripts\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.627711 master-0 kubenswrapper[31411]: I0224 02:39:27.627677 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-combined-ca-bundle\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.627823 master-0 kubenswrapper[31411]: I0224 02:39:27.627791 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-scripts\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.627918 master-0 kubenswrapper[31411]: I0224 02:39:27.627886 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.627987 master-0 kubenswrapper[31411]: I0224 02:39:27.627965 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-config-data\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.628056 master-0 kubenswrapper[31411]: I0224 02:39:27.628017 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-config-data\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.628158 master-0 kubenswrapper[31411]: I0224 02:39:27.628128 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:27.628158 master-0 kubenswrapper[31411]: I0224 02:39:27.628154 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d124862-0eea-40c8-a889-0db0c3b88b6f-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:27.628231 master-0 kubenswrapper[31411]: I0224 02:39:27.628168 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d124862-0eea-40c8-a889-0db0c3b88b6f-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:27.634856 master-0 kubenswrapper[31411]: I0224 02:39:27.634812 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-scripts\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.635082 master-0 kubenswrapper[31411]: I0224 02:39:27.635031 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.637857 master-0 kubenswrapper[31411]: I0224 02:39:27.637273 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-combined-ca-bundle\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.638352 master-0 kubenswrapper[31411]: I0224 02:39:27.638307 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-config-data\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.642781 master-0 kubenswrapper[31411]: I0224 02:39:27.642724 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-config-data\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.644623 master-0 kubenswrapper[31411]: I0224 02:39:27.644189 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-scripts\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.645719 master-0 kubenswrapper[31411]: I0224 02:39:27.645686 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jnn\" (UniqueName: \"kubernetes.io/projected/f284bd2b-440f-48e1-b995-9da2d0519a0b-kube-api-access-v2jnn\") pod \"nova-cell1-cell-mapping-rxn8v\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.652265 master-0 kubenswrapper[31411]: I0224 02:39:27.649998 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xnvfb\" (UniqueName: \"kubernetes.io/projected/f1c49571-1aa8-4a21-9d31-76039b6413d8-kube-api-access-xnvfb\") pod \"nova-cell1-host-discover-stqn6\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:27.836995 master-0 kubenswrapper[31411]: I0224 02:39:27.836929 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:27.856336 master-0 kubenswrapper[31411]: I0224 02:39:27.856265 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:28.167060 master-0 kubenswrapper[31411]: I0224 02:39:28.167012 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 02:39:28.175968 master-0 kubenswrapper[31411]: I0224 02:39:28.175909 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d124862-0eea-40c8-a889-0db0c3b88b6f","Type":"ContainerDied","Data":"0daa576ce562a516670a064433a0bee98177af149e34c3ea9ef236d7b479b1db"} Feb 24 02:39:28.176135 master-0 kubenswrapper[31411]: I0224 02:39:28.175978 31411 scope.go:117] "RemoveContainer" containerID="9601bf9afddb31a9f961f50649d7d7c25e528fec724949558f0c5bf3dab1a90d" Feb 24 02:39:28.214769 master-0 kubenswrapper[31411]: I0224 02:39:28.214698 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:28.220334 master-0 kubenswrapper[31411]: I0224 02:39:28.220273 31411 scope.go:117] "RemoveContainer" containerID="addbd283a212183cc42604ebcc1797b239e11d751ff94f8a3e317c33af750846" Feb 24 02:39:28.225103 master-0 kubenswrapper[31411]: I0224 02:39:28.225039 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:28.248387 master-0 kubenswrapper[31411]: I0224 02:39:28.248314 31411 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-0"] Feb 24 02:39:28.252505 master-0 kubenswrapper[31411]: I0224 02:39:28.250784 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 02:39:28.255143 master-0 kubenswrapper[31411]: I0224 02:39:28.255087 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 24 02:39:28.255276 master-0 kubenswrapper[31411]: I0224 02:39:28.255219 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 24 02:39:28.263174 master-0 kubenswrapper[31411]: I0224 02:39:28.263110 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 24 02:39:28.268453 master-0 kubenswrapper[31411]: I0224 02:39:28.268396 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:28.361158 master-0 kubenswrapper[31411]: I0224 02:39:28.359151 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-config-data\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.361158 master-0 kubenswrapper[31411]: I0224 02:39:28.359332 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-public-tls-certs\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.361158 master-0 kubenswrapper[31411]: I0224 02:39:28.359363 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd50fb2-396f-455d-96e3-82f255504258-logs\") pod \"nova-api-0\" (UID: 
\"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.361158 master-0 kubenswrapper[31411]: I0224 02:39:28.359386 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.361158 master-0 kubenswrapper[31411]: I0224 02:39:28.359467 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.361158 master-0 kubenswrapper[31411]: I0224 02:39:28.359548 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j4gl\" (UniqueName: \"kubernetes.io/projected/1fd50fb2-396f-455d-96e3-82f255504258-kube-api-access-2j4gl\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.410646 master-0 kubenswrapper[31411]: I0224 02:39:28.410564 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rxn8v"] Feb 24 02:39:28.466576 master-0 kubenswrapper[31411]: I0224 02:39:28.466518 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j4gl\" (UniqueName: \"kubernetes.io/projected/1fd50fb2-396f-455d-96e3-82f255504258-kube-api-access-2j4gl\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.470117 master-0 kubenswrapper[31411]: I0224 02:39:28.469787 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-config-data\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.470117 master-0 kubenswrapper[31411]: I0224 02:39:28.469913 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-public-tls-certs\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.470117 master-0 kubenswrapper[31411]: I0224 02:39:28.469939 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd50fb2-396f-455d-96e3-82f255504258-logs\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.470117 master-0 kubenswrapper[31411]: I0224 02:39:28.469958 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.470117 master-0 kubenswrapper[31411]: I0224 02:39:28.470045 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.470945 master-0 kubenswrapper[31411]: I0224 02:39:28.470915 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd50fb2-396f-455d-96e3-82f255504258-logs\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.474827 
master-0 kubenswrapper[31411]: I0224 02:39:28.474800 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.474827 master-0 kubenswrapper[31411]: I0224 02:39:28.474814 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.475800 master-0 kubenswrapper[31411]: I0224 02:39:28.475757 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-config-data\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.476704 master-0 kubenswrapper[31411]: I0224 02:39:28.476635 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-public-tls-certs\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.487872 master-0 kubenswrapper[31411]: I0224 02:39:28.487826 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j4gl\" (UniqueName: \"kubernetes.io/projected/1fd50fb2-396f-455d-96e3-82f255504258-kube-api-access-2j4gl\") pod \"nova-api-0\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " pod="openstack/nova-api-0" Feb 24 02:39:28.518979 master-0 kubenswrapper[31411]: I0224 02:39:28.518923 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-stqn6"] Feb 24 02:39:28.524676 master-0 
kubenswrapper[31411]: W0224 02:39:28.524635 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1c49571_1aa8_4a21_9d31_76039b6413d8.slice/crio-4df11bddf5befea3979bcaa162ccb434251624e6168eb883a3b58eefb77941ff WatchSource:0}: Error finding container 4df11bddf5befea3979bcaa162ccb434251624e6168eb883a3b58eefb77941ff: Status 404 returned error can't find the container with id 4df11bddf5befea3979bcaa162ccb434251624e6168eb883a3b58eefb77941ff Feb 24 02:39:28.587859 master-0 kubenswrapper[31411]: I0224 02:39:28.587793 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 02:39:29.172031 master-0 kubenswrapper[31411]: I0224 02:39:29.164435 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d124862-0eea-40c8-a889-0db0c3b88b6f" path="/var/lib/kubelet/pods/4d124862-0eea-40c8-a889-0db0c3b88b6f/volumes" Feb 24 02:39:29.176445 master-0 kubenswrapper[31411]: I0224 02:39:29.176147 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:29.208799 master-0 kubenswrapper[31411]: I0224 02:39:29.208749 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rxn8v" event={"ID":"f284bd2b-440f-48e1-b995-9da2d0519a0b","Type":"ContainerStarted","Data":"ba129ba62efac6ca8a65fbed4375af6d02144201fef82d26b760768f8a027dfc"} Feb 24 02:39:29.209019 master-0 kubenswrapper[31411]: I0224 02:39:29.209001 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rxn8v" event={"ID":"f284bd2b-440f-48e1-b995-9da2d0519a0b","Type":"ContainerStarted","Data":"4684453582a90a5523757cd37843759aa9920e55c4215abc3eac2b8891052621"} Feb 24 02:39:29.222156 master-0 kubenswrapper[31411]: I0224 02:39:29.222100 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-stqn6" 
event={"ID":"f1c49571-1aa8-4a21-9d31-76039b6413d8","Type":"ContainerStarted","Data":"6c1be05d035ee036673fcd1bda38523824882e54a83f1346c5a1559a69b29a6e"} Feb 24 02:39:29.225102 master-0 kubenswrapper[31411]: I0224 02:39:29.225075 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-stqn6" event={"ID":"f1c49571-1aa8-4a21-9d31-76039b6413d8","Type":"ContainerStarted","Data":"4df11bddf5befea3979bcaa162ccb434251624e6168eb883a3b58eefb77941ff"} Feb 24 02:39:29.261558 master-0 kubenswrapper[31411]: I0224 02:39:29.261463 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-rxn8v" podStartSLOduration=2.261442076 podStartE2EDuration="2.261442076s" podCreationTimestamp="2026-02-24 02:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:29.2399049 +0000 UTC m=+1112.457102746" watchObservedRunningTime="2026-02-24 02:39:29.261442076 +0000 UTC m=+1112.478639922" Feb 24 02:39:29.271271 master-0 kubenswrapper[31411]: I0224 02:39:29.271202 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-stqn6" podStartSLOduration=2.2711836610000002 podStartE2EDuration="2.271183661s" podCreationTimestamp="2026-02-24 02:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:29.269935286 +0000 UTC m=+1112.487133142" watchObservedRunningTime="2026-02-24 02:39:29.271183661 +0000 UTC m=+1112.488381507" Feb 24 02:39:30.238227 master-0 kubenswrapper[31411]: I0224 02:39:30.238154 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd50fb2-396f-455d-96e3-82f255504258","Type":"ContainerStarted","Data":"0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d"} Feb 24 02:39:30.238227 
master-0 kubenswrapper[31411]: I0224 02:39:30.238237 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd50fb2-396f-455d-96e3-82f255504258","Type":"ContainerStarted","Data":"854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776"} Feb 24 02:39:30.238903 master-0 kubenswrapper[31411]: I0224 02:39:30.238254 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd50fb2-396f-455d-96e3-82f255504258","Type":"ContainerStarted","Data":"45acf1f77db21ac8e29caa0394daaccb808b9e0e29095abb2f32b8b856d56c21"} Feb 24 02:39:30.264603 master-0 kubenswrapper[31411]: I0224 02:39:30.263667 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.263646896 podStartE2EDuration="2.263646896s" podCreationTimestamp="2026-02-24 02:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:30.261117525 +0000 UTC m=+1113.478315371" watchObservedRunningTime="2026-02-24 02:39:30.263646896 +0000 UTC m=+1113.480844742" Feb 24 02:39:30.690785 master-0 kubenswrapper[31411]: I0224 02:39:30.690543 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7586c46c57-vgvpz" Feb 24 02:39:30.817493 master-0 kubenswrapper[31411]: I0224 02:39:30.817218 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"] Feb 24 02:39:30.817493 master-0 kubenswrapper[31411]: I0224 02:39:30.817485 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" podUID="d432fd91-0f08-43b3-8201-65f8d4e7efe8" containerName="dnsmasq-dns" containerID="cri-o://8ce04f2df867785b0371b6428fe78a5e62680d50bb459d1c42d88b284e3d4a4a" gracePeriod=10 Feb 24 02:39:31.266904 master-0 kubenswrapper[31411]: I0224 02:39:31.266836 31411 
generic.go:334] "Generic (PLEG): container finished" podID="d432fd91-0f08-43b3-8201-65f8d4e7efe8" containerID="8ce04f2df867785b0371b6428fe78a5e62680d50bb459d1c42d88b284e3d4a4a" exitCode=0 Feb 24 02:39:31.267483 master-0 kubenswrapper[31411]: I0224 02:39:31.267150 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" event={"ID":"d432fd91-0f08-43b3-8201-65f8d4e7efe8","Type":"ContainerDied","Data":"8ce04f2df867785b0371b6428fe78a5e62680d50bb459d1c42d88b284e3d4a4a"} Feb 24 02:39:31.420966 master-0 kubenswrapper[31411]: I0224 02:39:31.420930 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" Feb 24 02:39:31.492843 master-0 kubenswrapper[31411]: I0224 02:39:31.492439 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-nb\") pod \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " Feb 24 02:39:31.492843 master-0 kubenswrapper[31411]: I0224 02:39:31.492527 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-sb\") pod \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " Feb 24 02:39:31.492843 master-0 kubenswrapper[31411]: I0224 02:39:31.492713 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-config\") pod \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " Feb 24 02:39:31.492843 master-0 kubenswrapper[31411]: I0224 02:39:31.492788 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-swift-storage-0\") pod \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " Feb 24 02:39:31.493318 master-0 kubenswrapper[31411]: I0224 02:39:31.492942 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-svc\") pod \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " Feb 24 02:39:31.493318 master-0 kubenswrapper[31411]: I0224 02:39:31.492997 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9xlj\" (UniqueName: \"kubernetes.io/projected/d432fd91-0f08-43b3-8201-65f8d4e7efe8-kube-api-access-v9xlj\") pod \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " Feb 24 02:39:31.550532 master-0 kubenswrapper[31411]: I0224 02:39:31.550458 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d432fd91-0f08-43b3-8201-65f8d4e7efe8-kube-api-access-v9xlj" (OuterVolumeSpecName: "kube-api-access-v9xlj") pod "d432fd91-0f08-43b3-8201-65f8d4e7efe8" (UID: "d432fd91-0f08-43b3-8201-65f8d4e7efe8"). InnerVolumeSpecName "kube-api-access-v9xlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:39:31.591072 master-0 kubenswrapper[31411]: I0224 02:39:31.591015 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d432fd91-0f08-43b3-8201-65f8d4e7efe8" (UID: "d432fd91-0f08-43b3-8201-65f8d4e7efe8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:39:31.592504 master-0 kubenswrapper[31411]: I0224 02:39:31.592455 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d432fd91-0f08-43b3-8201-65f8d4e7efe8" (UID: "d432fd91-0f08-43b3-8201-65f8d4e7efe8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:39:31.595090 master-0 kubenswrapper[31411]: I0224 02:39:31.595058 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-config" (OuterVolumeSpecName: "config") pod "d432fd91-0f08-43b3-8201-65f8d4e7efe8" (UID: "d432fd91-0f08-43b3-8201-65f8d4e7efe8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:39:31.595301 master-0 kubenswrapper[31411]: I0224 02:39:31.595267 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-config\") pod \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\" (UID: \"d432fd91-0f08-43b3-8201-65f8d4e7efe8\") " Feb 24 02:39:31.596010 master-0 kubenswrapper[31411]: I0224 02:39:31.595979 31411 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:31.596061 master-0 kubenswrapper[31411]: I0224 02:39:31.596004 31411 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:31.596061 master-0 kubenswrapper[31411]: I0224 02:39:31.596035 31411 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-v9xlj\" (UniqueName: \"kubernetes.io/projected/d432fd91-0f08-43b3-8201-65f8d4e7efe8-kube-api-access-v9xlj\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:31.596130 master-0 kubenswrapper[31411]: W0224 02:39:31.596118 31411 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d432fd91-0f08-43b3-8201-65f8d4e7efe8/volumes/kubernetes.io~configmap/config Feb 24 02:39:31.596167 master-0 kubenswrapper[31411]: I0224 02:39:31.596130 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-config" (OuterVolumeSpecName: "config") pod "d432fd91-0f08-43b3-8201-65f8d4e7efe8" (UID: "d432fd91-0f08-43b3-8201-65f8d4e7efe8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:39:31.606433 master-0 kubenswrapper[31411]: I0224 02:39:31.606394 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d432fd91-0f08-43b3-8201-65f8d4e7efe8" (UID: "d432fd91-0f08-43b3-8201-65f8d4e7efe8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:39:31.612196 master-0 kubenswrapper[31411]: I0224 02:39:31.612152 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d432fd91-0f08-43b3-8201-65f8d4e7efe8" (UID: "d432fd91-0f08-43b3-8201-65f8d4e7efe8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 02:39:31.698630 master-0 kubenswrapper[31411]: I0224 02:39:31.698536 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:31.698630 master-0 kubenswrapper[31411]: I0224 02:39:31.698605 31411 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:31.698630 master-0 kubenswrapper[31411]: I0224 02:39:31.698619 31411 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d432fd91-0f08-43b3-8201-65f8d4e7efe8-config\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:32.294358 master-0 kubenswrapper[31411]: I0224 02:39:32.294099 31411 generic.go:334] "Generic (PLEG): container finished" podID="f1c49571-1aa8-4a21-9d31-76039b6413d8" containerID="6c1be05d035ee036673fcd1bda38523824882e54a83f1346c5a1559a69b29a6e" exitCode=0 Feb 24 02:39:32.294358 master-0 kubenswrapper[31411]: I0224 02:39:32.294217 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-stqn6" event={"ID":"f1c49571-1aa8-4a21-9d31-76039b6413d8","Type":"ContainerDied","Data":"6c1be05d035ee036673fcd1bda38523824882e54a83f1346c5a1559a69b29a6e"} Feb 24 02:39:32.300453 master-0 kubenswrapper[31411]: I0224 02:39:32.300379 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" event={"ID":"d432fd91-0f08-43b3-8201-65f8d4e7efe8","Type":"ContainerDied","Data":"45326ccb0211773e324f73244d8a906c31aa58835a98f1feda3ba56614a40ca3"} Feb 24 02:39:32.300695 master-0 kubenswrapper[31411]: I0224 02:39:32.300454 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6fd9d5d9-zff6h" Feb 24 02:39:32.300811 master-0 kubenswrapper[31411]: I0224 02:39:32.300613 31411 scope.go:117] "RemoveContainer" containerID="8ce04f2df867785b0371b6428fe78a5e62680d50bb459d1c42d88b284e3d4a4a" Feb 24 02:39:32.371039 master-0 kubenswrapper[31411]: I0224 02:39:32.364451 31411 scope.go:117] "RemoveContainer" containerID="acf7c71f5752a3d46eae2f230998f1510de297f03fdd4fc0d2087b03506352d3" Feb 24 02:39:32.379110 master-0 kubenswrapper[31411]: I0224 02:39:32.376080 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"] Feb 24 02:39:32.390728 master-0 kubenswrapper[31411]: I0224 02:39:32.390666 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6fd9d5d9-zff6h"] Feb 24 02:39:33.111951 master-0 kubenswrapper[31411]: I0224 02:39:33.111863 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d432fd91-0f08-43b3-8201-65f8d4e7efe8" path="/var/lib/kubelet/pods/d432fd91-0f08-43b3-8201-65f8d4e7efe8/volumes" Feb 24 02:39:33.320151 master-0 kubenswrapper[31411]: I0224 02:39:33.320075 31411 generic.go:334] "Generic (PLEG): container finished" podID="f284bd2b-440f-48e1-b995-9da2d0519a0b" containerID="ba129ba62efac6ca8a65fbed4375af6d02144201fef82d26b760768f8a027dfc" exitCode=0 Feb 24 02:39:33.320997 master-0 kubenswrapper[31411]: I0224 02:39:33.320198 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rxn8v" event={"ID":"f284bd2b-440f-48e1-b995-9da2d0519a0b","Type":"ContainerDied","Data":"ba129ba62efac6ca8a65fbed4375af6d02144201fef82d26b760768f8a027dfc"} Feb 24 02:39:33.985388 master-0 kubenswrapper[31411]: I0224 02:39:33.985300 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:34.099703 master-0 kubenswrapper[31411]: I0224 02:39:34.098313 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-combined-ca-bundle\") pod \"f1c49571-1aa8-4a21-9d31-76039b6413d8\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " Feb 24 02:39:34.099703 master-0 kubenswrapper[31411]: I0224 02:39:34.098403 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnvfb\" (UniqueName: \"kubernetes.io/projected/f1c49571-1aa8-4a21-9d31-76039b6413d8-kube-api-access-xnvfb\") pod \"f1c49571-1aa8-4a21-9d31-76039b6413d8\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " Feb 24 02:39:34.099703 master-0 kubenswrapper[31411]: I0224 02:39:34.098494 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-config-data\") pod \"f1c49571-1aa8-4a21-9d31-76039b6413d8\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " Feb 24 02:39:34.099703 master-0 kubenswrapper[31411]: I0224 02:39:34.098619 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-scripts\") pod \"f1c49571-1aa8-4a21-9d31-76039b6413d8\" (UID: \"f1c49571-1aa8-4a21-9d31-76039b6413d8\") " Feb 24 02:39:34.103603 master-0 kubenswrapper[31411]: I0224 02:39:34.103096 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-scripts" (OuterVolumeSpecName: "scripts") pod "f1c49571-1aa8-4a21-9d31-76039b6413d8" (UID: "f1c49571-1aa8-4a21-9d31-76039b6413d8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:34.104151 master-0 kubenswrapper[31411]: I0224 02:39:34.104107 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c49571-1aa8-4a21-9d31-76039b6413d8-kube-api-access-xnvfb" (OuterVolumeSpecName: "kube-api-access-xnvfb") pod "f1c49571-1aa8-4a21-9d31-76039b6413d8" (UID: "f1c49571-1aa8-4a21-9d31-76039b6413d8"). InnerVolumeSpecName "kube-api-access-xnvfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:39:34.142519 master-0 kubenswrapper[31411]: I0224 02:39:34.142455 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-config-data" (OuterVolumeSpecName: "config-data") pod "f1c49571-1aa8-4a21-9d31-76039b6413d8" (UID: "f1c49571-1aa8-4a21-9d31-76039b6413d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:34.158708 master-0 kubenswrapper[31411]: I0224 02:39:34.158649 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1c49571-1aa8-4a21-9d31-76039b6413d8" (UID: "f1c49571-1aa8-4a21-9d31-76039b6413d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:34.205095 master-0 kubenswrapper[31411]: I0224 02:39:34.205033 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:34.205095 master-0 kubenswrapper[31411]: I0224 02:39:34.205090 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnvfb\" (UniqueName: \"kubernetes.io/projected/f1c49571-1aa8-4a21-9d31-76039b6413d8-kube-api-access-xnvfb\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:34.205235 master-0 kubenswrapper[31411]: I0224 02:39:34.205115 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:34.205235 master-0 kubenswrapper[31411]: I0224 02:39:34.205135 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c49571-1aa8-4a21-9d31-76039b6413d8-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:34.342913 master-0 kubenswrapper[31411]: I0224 02:39:34.342810 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-stqn6" event={"ID":"f1c49571-1aa8-4a21-9d31-76039b6413d8","Type":"ContainerDied","Data":"4df11bddf5befea3979bcaa162ccb434251624e6168eb883a3b58eefb77941ff"} Feb 24 02:39:34.342913 master-0 kubenswrapper[31411]: I0224 02:39:34.342882 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-stqn6" Feb 24 02:39:34.342913 master-0 kubenswrapper[31411]: I0224 02:39:34.342910 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df11bddf5befea3979bcaa162ccb434251624e6168eb883a3b58eefb77941ff" Feb 24 02:39:34.757835 master-0 kubenswrapper[31411]: I0224 02:39:34.757795 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:34.827982 master-0 kubenswrapper[31411]: I0224 02:39:34.827866 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-scripts\") pod \"f284bd2b-440f-48e1-b995-9da2d0519a0b\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " Feb 24 02:39:34.827982 master-0 kubenswrapper[31411]: I0224 02:39:34.827948 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-combined-ca-bundle\") pod \"f284bd2b-440f-48e1-b995-9da2d0519a0b\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " Feb 24 02:39:34.830306 master-0 kubenswrapper[31411]: I0224 02:39:34.830245 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-config-data\") pod \"f284bd2b-440f-48e1-b995-9da2d0519a0b\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " Feb 24 02:39:34.830720 master-0 kubenswrapper[31411]: I0224 02:39:34.830646 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2jnn\" (UniqueName: \"kubernetes.io/projected/f284bd2b-440f-48e1-b995-9da2d0519a0b-kube-api-access-v2jnn\") pod \"f284bd2b-440f-48e1-b995-9da2d0519a0b\" (UID: \"f284bd2b-440f-48e1-b995-9da2d0519a0b\") " Feb 24 02:39:34.838535 
master-0 kubenswrapper[31411]: I0224 02:39:34.838488 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f284bd2b-440f-48e1-b995-9da2d0519a0b-kube-api-access-v2jnn" (OuterVolumeSpecName: "kube-api-access-v2jnn") pod "f284bd2b-440f-48e1-b995-9da2d0519a0b" (UID: "f284bd2b-440f-48e1-b995-9da2d0519a0b"). InnerVolumeSpecName "kube-api-access-v2jnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:39:34.853348 master-0 kubenswrapper[31411]: I0224 02:39:34.853281 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-scripts" (OuterVolumeSpecName: "scripts") pod "f284bd2b-440f-48e1-b995-9da2d0519a0b" (UID: "f284bd2b-440f-48e1-b995-9da2d0519a0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:34.869837 master-0 kubenswrapper[31411]: I0224 02:39:34.869772 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-config-data" (OuterVolumeSpecName: "config-data") pod "f284bd2b-440f-48e1-b995-9da2d0519a0b" (UID: "f284bd2b-440f-48e1-b995-9da2d0519a0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:34.895503 master-0 kubenswrapper[31411]: I0224 02:39:34.895427 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f284bd2b-440f-48e1-b995-9da2d0519a0b" (UID: "f284bd2b-440f-48e1-b995-9da2d0519a0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:34.937982 master-0 kubenswrapper[31411]: I0224 02:39:34.937772 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2jnn\" (UniqueName: \"kubernetes.io/projected/f284bd2b-440f-48e1-b995-9da2d0519a0b-kube-api-access-v2jnn\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:34.937982 master-0 kubenswrapper[31411]: I0224 02:39:34.937967 31411 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:34.937982 master-0 kubenswrapper[31411]: I0224 02:39:34.937984 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:34.937982 master-0 kubenswrapper[31411]: I0224 02:39:34.937997 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f284bd2b-440f-48e1-b995-9da2d0519a0b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:35.366285 master-0 kubenswrapper[31411]: I0224 02:39:35.366187 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rxn8v" event={"ID":"f284bd2b-440f-48e1-b995-9da2d0519a0b","Type":"ContainerDied","Data":"4684453582a90a5523757cd37843759aa9920e55c4215abc3eac2b8891052621"} Feb 24 02:39:35.366285 master-0 kubenswrapper[31411]: I0224 02:39:35.366261 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4684453582a90a5523757cd37843759aa9920e55c4215abc3eac2b8891052621" Feb 24 02:39:35.367441 master-0 kubenswrapper[31411]: I0224 02:39:35.366321 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rxn8v" Feb 24 02:39:35.619940 master-0 kubenswrapper[31411]: I0224 02:39:35.619816 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:35.620366 master-0 kubenswrapper[31411]: I0224 02:39:35.620312 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1fd50fb2-396f-455d-96e3-82f255504258" containerName="nova-api-log" containerID="cri-o://854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776" gracePeriod=30 Feb 24 02:39:35.620473 master-0 kubenswrapper[31411]: I0224 02:39:35.620388 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1fd50fb2-396f-455d-96e3-82f255504258" containerName="nova-api-api" containerID="cri-o://0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d" gracePeriod=30 Feb 24 02:39:35.704937 master-0 kubenswrapper[31411]: I0224 02:39:35.704796 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 02:39:35.705429 master-0 kubenswrapper[31411]: I0224 02:39:35.705219 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a0c89c95-1f73-4996-87ef-183a13c5891b" containerName="nova-scheduler-scheduler" containerID="cri-o://aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d" gracePeriod=30 Feb 24 02:39:35.813724 master-0 kubenswrapper[31411]: I0224 02:39:35.812784 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:39:35.813724 master-0 kubenswrapper[31411]: I0224 02:39:35.813238 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-log" containerID="cri-o://764a91a85ac0d521c5936462cdbe700036ddc082cfa739afbda0b9bc198e3d84" 
gracePeriod=30 Feb 24 02:39:35.814175 master-0 kubenswrapper[31411]: I0224 02:39:35.814141 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-metadata" containerID="cri-o://3fbdcfc81c110b39ef61e318226540831254e1314a24eae991738b4b04cab74f" gracePeriod=30 Feb 24 02:39:36.384433 master-0 kubenswrapper[31411]: I0224 02:39:36.384355 31411 generic.go:334] "Generic (PLEG): container finished" podID="d1063d6c-8667-43df-8967-4a22ce919924" containerID="764a91a85ac0d521c5936462cdbe700036ddc082cfa739afbda0b9bc198e3d84" exitCode=143 Feb 24 02:39:36.385102 master-0 kubenswrapper[31411]: I0224 02:39:36.384436 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d1063d6c-8667-43df-8967-4a22ce919924","Type":"ContainerDied","Data":"764a91a85ac0d521c5936462cdbe700036ddc082cfa739afbda0b9bc198e3d84"} Feb 24 02:39:36.385216 master-0 kubenswrapper[31411]: I0224 02:39:36.385120 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 02:39:36.386919 master-0 kubenswrapper[31411]: I0224 02:39:36.386858 31411 generic.go:334] "Generic (PLEG): container finished" podID="1fd50fb2-396f-455d-96e3-82f255504258" containerID="0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d" exitCode=0 Feb 24 02:39:36.386919 master-0 kubenswrapper[31411]: I0224 02:39:36.386909 31411 generic.go:334] "Generic (PLEG): container finished" podID="1fd50fb2-396f-455d-96e3-82f255504258" containerID="854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776" exitCode=143 Feb 24 02:39:36.387022 master-0 kubenswrapper[31411]: I0224 02:39:36.386944 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd50fb2-396f-455d-96e3-82f255504258","Type":"ContainerDied","Data":"0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d"} Feb 24 02:39:36.387022 master-0 kubenswrapper[31411]: I0224 02:39:36.387003 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd50fb2-396f-455d-96e3-82f255504258","Type":"ContainerDied","Data":"854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776"} Feb 24 02:39:36.387022 master-0 kubenswrapper[31411]: I0224 02:39:36.387020 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fd50fb2-396f-455d-96e3-82f255504258","Type":"ContainerDied","Data":"45acf1f77db21ac8e29caa0394daaccb808b9e0e29095abb2f32b8b856d56c21"} Feb 24 02:39:36.387112 master-0 kubenswrapper[31411]: I0224 02:39:36.387043 31411 scope.go:117] "RemoveContainer" containerID="0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d" Feb 24 02:39:36.431940 master-0 kubenswrapper[31411]: I0224 02:39:36.425815 31411 scope.go:117] "RemoveContainer" containerID="854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: I0224 02:39:36.486625 31411 scope.go:117] 
"RemoveContainer" containerID="0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: E0224 02:39:36.487655 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d\": container with ID starting with 0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d not found: ID does not exist" containerID="0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: I0224 02:39:36.487716 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d"} err="failed to get container status \"0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d\": rpc error: code = NotFound desc = could not find container \"0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d\": container with ID starting with 0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d not found: ID does not exist" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: I0224 02:39:36.487757 31411 scope.go:117] "RemoveContainer" containerID="854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: E0224 02:39:36.488546 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776\": container with ID starting with 854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776 not found: ID does not exist" containerID="854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: I0224 02:39:36.488627 31411 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776"} err="failed to get container status \"854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776\": rpc error: code = NotFound desc = could not find container \"854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776\": container with ID starting with 854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776 not found: ID does not exist" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: I0224 02:39:36.488677 31411 scope.go:117] "RemoveContainer" containerID="0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: I0224 02:39:36.489076 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d"} err="failed to get container status \"0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d\": rpc error: code = NotFound desc = could not find container \"0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d\": container with ID starting with 0c00c4257a2704cd311edc4d1d51f76bbcfce6ef8aa22c99df2f139d6a7a4e3d not found: ID does not exist" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: I0224 02:39:36.489102 31411 scope.go:117] "RemoveContainer" containerID="854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776" Feb 24 02:39:36.489835 master-0 kubenswrapper[31411]: I0224 02:39:36.489340 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776"} err="failed to get container status \"854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776\": rpc error: code = NotFound desc = could not find container \"854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776\": container with ID starting with 
854018a4a4e5e90e8b8bc651e0ec05005fe48cdc4098746ef2956b24466c1776 not found: ID does not exist" Feb 24 02:39:36.499029 master-0 kubenswrapper[31411]: I0224 02:39:36.496102 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j4gl\" (UniqueName: \"kubernetes.io/projected/1fd50fb2-396f-455d-96e3-82f255504258-kube-api-access-2j4gl\") pod \"1fd50fb2-396f-455d-96e3-82f255504258\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " Feb 24 02:39:36.499029 master-0 kubenswrapper[31411]: I0224 02:39:36.496431 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd50fb2-396f-455d-96e3-82f255504258-logs\") pod \"1fd50fb2-396f-455d-96e3-82f255504258\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " Feb 24 02:39:36.499029 master-0 kubenswrapper[31411]: I0224 02:39:36.496475 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-combined-ca-bundle\") pod \"1fd50fb2-396f-455d-96e3-82f255504258\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " Feb 24 02:39:36.499029 master-0 kubenswrapper[31411]: I0224 02:39:36.496702 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-internal-tls-certs\") pod \"1fd50fb2-396f-455d-96e3-82f255504258\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " Feb 24 02:39:36.499029 master-0 kubenswrapper[31411]: I0224 02:39:36.496731 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-public-tls-certs\") pod \"1fd50fb2-396f-455d-96e3-82f255504258\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " Feb 24 02:39:36.499029 master-0 kubenswrapper[31411]: I0224 
02:39:36.496770 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fd50fb2-396f-455d-96e3-82f255504258-logs" (OuterVolumeSpecName: "logs") pod "1fd50fb2-396f-455d-96e3-82f255504258" (UID: "1fd50fb2-396f-455d-96e3-82f255504258"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:39:36.499029 master-0 kubenswrapper[31411]: I0224 02:39:36.496971 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-config-data\") pod \"1fd50fb2-396f-455d-96e3-82f255504258\" (UID: \"1fd50fb2-396f-455d-96e3-82f255504258\") " Feb 24 02:39:36.499029 master-0 kubenswrapper[31411]: I0224 02:39:36.497545 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd50fb2-396f-455d-96e3-82f255504258-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:36.502774 master-0 kubenswrapper[31411]: I0224 02:39:36.502051 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd50fb2-396f-455d-96e3-82f255504258-kube-api-access-2j4gl" (OuterVolumeSpecName: "kube-api-access-2j4gl") pod "1fd50fb2-396f-455d-96e3-82f255504258" (UID: "1fd50fb2-396f-455d-96e3-82f255504258"). InnerVolumeSpecName "kube-api-access-2j4gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:39:36.530213 master-0 kubenswrapper[31411]: I0224 02:39:36.530155 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-config-data" (OuterVolumeSpecName: "config-data") pod "1fd50fb2-396f-455d-96e3-82f255504258" (UID: "1fd50fb2-396f-455d-96e3-82f255504258"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:36.538525 master-0 kubenswrapper[31411]: I0224 02:39:36.538459 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fd50fb2-396f-455d-96e3-82f255504258" (UID: "1fd50fb2-396f-455d-96e3-82f255504258"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:36.582458 master-0 kubenswrapper[31411]: I0224 02:39:36.582385 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1fd50fb2-396f-455d-96e3-82f255504258" (UID: "1fd50fb2-396f-455d-96e3-82f255504258"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:36.585370 master-0 kubenswrapper[31411]: I0224 02:39:36.585334 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1fd50fb2-396f-455d-96e3-82f255504258" (UID: "1fd50fb2-396f-455d-96e3-82f255504258"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:36.600114 master-0 kubenswrapper[31411]: I0224 02:39:36.600064 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:36.600114 master-0 kubenswrapper[31411]: I0224 02:39:36.600107 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j4gl\" (UniqueName: \"kubernetes.io/projected/1fd50fb2-396f-455d-96e3-82f255504258-kube-api-access-2j4gl\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:36.600114 master-0 kubenswrapper[31411]: I0224 02:39:36.600119 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:36.600313 master-0 kubenswrapper[31411]: I0224 02:39:36.600130 31411 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:36.600313 master-0 kubenswrapper[31411]: I0224 02:39:36.600140 31411 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd50fb2-396f-455d-96e3-82f255504258-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:36.722407 master-0 kubenswrapper[31411]: E0224 02:39:36.722340 31411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 24 02:39:36.727194 master-0 kubenswrapper[31411]: E0224 02:39:36.727065 31411 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 24 02:39:36.729432 master-0 kubenswrapper[31411]: E0224 02:39:36.729359 31411 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 24 02:39:36.729525 master-0 kubenswrapper[31411]: E0224 02:39:36.729444 31411 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a0c89c95-1f73-4996-87ef-183a13c5891b" containerName="nova-scheduler-scheduler" Feb 24 02:39:37.420687 master-0 kubenswrapper[31411]: I0224 02:39:37.420604 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 02:39:37.501238 master-0 kubenswrapper[31411]: I0224 02:39:37.501125 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:37.540531 master-0 kubenswrapper[31411]: I0224 02:39:37.540424 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.575082 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: E0224 02:39:37.575927 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d432fd91-0f08-43b3-8201-65f8d4e7efe8" containerName="init" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.575948 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d432fd91-0f08-43b3-8201-65f8d4e7efe8" containerName="init" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: E0224 02:39:37.575986 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c49571-1aa8-4a21-9d31-76039b6413d8" containerName="nova-manage" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.575994 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c49571-1aa8-4a21-9d31-76039b6413d8" containerName="nova-manage" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: E0224 02:39:37.576039 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd50fb2-396f-455d-96e3-82f255504258" containerName="nova-api-api" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.576053 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd50fb2-396f-455d-96e3-82f255504258" containerName="nova-api-api" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: E0224 02:39:37.576076 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f284bd2b-440f-48e1-b995-9da2d0519a0b" containerName="nova-manage" Feb 24 
02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.576084 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="f284bd2b-440f-48e1-b995-9da2d0519a0b" containerName="nova-manage" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: E0224 02:39:37.576101 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d432fd91-0f08-43b3-8201-65f8d4e7efe8" containerName="dnsmasq-dns" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.576111 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d432fd91-0f08-43b3-8201-65f8d4e7efe8" containerName="dnsmasq-dns" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: E0224 02:39:37.576123 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd50fb2-396f-455d-96e3-82f255504258" containerName="nova-api-log" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.576134 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd50fb2-396f-455d-96e3-82f255504258" containerName="nova-api-log" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.576492 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="f284bd2b-440f-48e1-b995-9da2d0519a0b" containerName="nova-manage" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.576536 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c49571-1aa8-4a21-9d31-76039b6413d8" containerName="nova-manage" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.576560 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d432fd91-0f08-43b3-8201-65f8d4e7efe8" containerName="dnsmasq-dns" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.576593 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd50fb2-396f-455d-96e3-82f255504258" containerName="nova-api-api" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.576615 31411 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1fd50fb2-396f-455d-96e3-82f255504258" containerName="nova-api-log" Feb 24 02:39:37.578601 master-0 kubenswrapper[31411]: I0224 02:39:37.578450 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 02:39:37.588181 master-0 kubenswrapper[31411]: I0224 02:39:37.588107 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 24 02:39:37.595253 master-0 kubenswrapper[31411]: I0224 02:39:37.592084 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 24 02:39:37.601246 master-0 kubenswrapper[31411]: I0224 02:39:37.601170 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 24 02:39:37.643940 master-0 kubenswrapper[31411]: I0224 02:39:37.624382 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:37.735239 master-0 kubenswrapper[31411]: I0224 02:39:37.735147 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-public-tls-certs\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.735489 master-0 kubenswrapper[31411]: I0224 02:39:37.735291 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lpjf\" (UniqueName: \"kubernetes.io/projected/685eb0ae-79cd-488a-894f-2ef620e61225-kube-api-access-4lpjf\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.736594 master-0 kubenswrapper[31411]: I0224 02:39:37.736248 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/685eb0ae-79cd-488a-894f-2ef620e61225-logs\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.736594 master-0 kubenswrapper[31411]: I0224 02:39:37.736286 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-internal-tls-certs\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.736594 master-0 kubenswrapper[31411]: I0224 02:39:37.736438 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.736594 master-0 kubenswrapper[31411]: I0224 02:39:37.736490 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-config-data\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.838759 master-0 kubenswrapper[31411]: I0224 02:39:37.838683 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.839069 master-0 kubenswrapper[31411]: I0224 02:39:37.838819 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-config-data\") pod \"nova-api-0\" (UID: 
\"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.839069 master-0 kubenswrapper[31411]: I0224 02:39:37.838881 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-public-tls-certs\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.839984 master-0 kubenswrapper[31411]: I0224 02:39:37.839939 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lpjf\" (UniqueName: \"kubernetes.io/projected/685eb0ae-79cd-488a-894f-2ef620e61225-kube-api-access-4lpjf\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.840046 master-0 kubenswrapper[31411]: I0224 02:39:37.840031 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/685eb0ae-79cd-488a-894f-2ef620e61225-logs\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.840088 master-0 kubenswrapper[31411]: I0224 02:39:37.840062 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-internal-tls-certs\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.840911 master-0 kubenswrapper[31411]: I0224 02:39:37.840862 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/685eb0ae-79cd-488a-894f-2ef620e61225-logs\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.845085 master-0 kubenswrapper[31411]: I0224 02:39:37.845046 31411 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-config-data\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.845225 master-0 kubenswrapper[31411]: I0224 02:39:37.845107 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-public-tls-certs\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.851722 master-0 kubenswrapper[31411]: I0224 02:39:37.851667 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-internal-tls-certs\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.854235 master-0 kubenswrapper[31411]: I0224 02:39:37.854194 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685eb0ae-79cd-488a-894f-2ef620e61225-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.863125 master-0 kubenswrapper[31411]: I0224 02:39:37.863084 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lpjf\" (UniqueName: \"kubernetes.io/projected/685eb0ae-79cd-488a-894f-2ef620e61225-kube-api-access-4lpjf\") pod \"nova-api-0\" (UID: \"685eb0ae-79cd-488a-894f-2ef620e61225\") " pod="openstack/nova-api-0" Feb 24 02:39:37.941896 master-0 kubenswrapper[31411]: I0224 02:39:37.941818 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 02:39:38.545320 master-0 kubenswrapper[31411]: W0224 02:39:38.545242 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod685eb0ae_79cd_488a_894f_2ef620e61225.slice/crio-665a129de68566957fd3f18b556e3c21c125d2320bb3e7d0d969cc04dd8db706 WatchSource:0}: Error finding container 665a129de68566957fd3f18b556e3c21c125d2320bb3e7d0d969cc04dd8db706: Status 404 returned error can't find the container with id 665a129de68566957fd3f18b556e3c21c125d2320bb3e7d0d969cc04dd8db706 Feb 24 02:39:38.564504 master-0 kubenswrapper[31411]: I0224 02:39:38.564426 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 02:39:38.950509 master-0 kubenswrapper[31411]: I0224 02:39:38.950403 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.8:8775/\": read tcp 10.128.0.2:60452->10.128.1.8:8775: read: connection reset by peer" Feb 24 02:39:38.950509 master-0 kubenswrapper[31411]: I0224 02:39:38.950434 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.8:8775/\": read tcp 10.128.0.2:60454->10.128.1.8:8775: read: connection reset by peer" Feb 24 02:39:39.123099 master-0 kubenswrapper[31411]: I0224 02:39:39.123024 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd50fb2-396f-455d-96e3-82f255504258" path="/var/lib/kubelet/pods/1fd50fb2-396f-455d-96e3-82f255504258/volumes" Feb 24 02:39:39.466993 master-0 kubenswrapper[31411]: I0224 02:39:39.466910 31411 generic.go:334] "Generic (PLEG): container finished" podID="d1063d6c-8667-43df-8967-4a22ce919924" 
containerID="3fbdcfc81c110b39ef61e318226540831254e1314a24eae991738b4b04cab74f" exitCode=0 Feb 24 02:39:39.467145 master-0 kubenswrapper[31411]: I0224 02:39:39.466996 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d1063d6c-8667-43df-8967-4a22ce919924","Type":"ContainerDied","Data":"3fbdcfc81c110b39ef61e318226540831254e1314a24eae991738b4b04cab74f"} Feb 24 02:39:39.467145 master-0 kubenswrapper[31411]: I0224 02:39:39.467028 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d1063d6c-8667-43df-8967-4a22ce919924","Type":"ContainerDied","Data":"de3f13fdad639eed662c0e91175e4959aed64be9827a2b1558c7b3885c52cafb"} Feb 24 02:39:39.467145 master-0 kubenswrapper[31411]: I0224 02:39:39.467041 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de3f13fdad639eed662c0e91175e4959aed64be9827a2b1558c7b3885c52cafb" Feb 24 02:39:39.471600 master-0 kubenswrapper[31411]: I0224 02:39:39.470193 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"685eb0ae-79cd-488a-894f-2ef620e61225","Type":"ContainerStarted","Data":"aa1de4e67ca5101faba73fd07d676e06c127926f662690e8fb47a3b1b24cb200"} Feb 24 02:39:39.471600 master-0 kubenswrapper[31411]: I0224 02:39:39.470261 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"685eb0ae-79cd-488a-894f-2ef620e61225","Type":"ContainerStarted","Data":"57e84e19604961f787f5c8d344fbb6bd9b5e00fe28c5c2bd93dfa4b0a610e8cc"} Feb 24 02:39:39.471600 master-0 kubenswrapper[31411]: I0224 02:39:39.470283 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"685eb0ae-79cd-488a-894f-2ef620e61225","Type":"ContainerStarted","Data":"665a129de68566957fd3f18b556e3c21c125d2320bb3e7d0d969cc04dd8db706"} Feb 24 02:39:39.503073 master-0 kubenswrapper[31411]: I0224 02:39:39.501380 31411 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.501359283 podStartE2EDuration="2.501359283s" podCreationTimestamp="2026-02-24 02:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:39.496751293 +0000 UTC m=+1122.713949169" watchObservedRunningTime="2026-02-24 02:39:39.501359283 +0000 UTC m=+1122.718557139" Feb 24 02:39:39.554368 master-0 kubenswrapper[31411]: I0224 02:39:39.554309 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:39:39.599899 master-0 kubenswrapper[31411]: I0224 02:39:39.599820 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc588\" (UniqueName: \"kubernetes.io/projected/d1063d6c-8667-43df-8967-4a22ce919924-kube-api-access-sc588\") pod \"d1063d6c-8667-43df-8967-4a22ce919924\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " Feb 24 02:39:39.600207 master-0 kubenswrapper[31411]: I0224 02:39:39.599928 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-nova-metadata-tls-certs\") pod \"d1063d6c-8667-43df-8967-4a22ce919924\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " Feb 24 02:39:39.600207 master-0 kubenswrapper[31411]: I0224 02:39:39.600053 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-config-data\") pod \"d1063d6c-8667-43df-8967-4a22ce919924\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " Feb 24 02:39:39.600207 master-0 kubenswrapper[31411]: I0224 02:39:39.600109 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-combined-ca-bundle\") pod \"d1063d6c-8667-43df-8967-4a22ce919924\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " Feb 24 02:39:39.600207 master-0 kubenswrapper[31411]: I0224 02:39:39.600155 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1063d6c-8667-43df-8967-4a22ce919924-logs\") pod \"d1063d6c-8667-43df-8967-4a22ce919924\" (UID: \"d1063d6c-8667-43df-8967-4a22ce919924\") " Feb 24 02:39:39.603197 master-0 kubenswrapper[31411]: I0224 02:39:39.601224 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1063d6c-8667-43df-8967-4a22ce919924-logs" (OuterVolumeSpecName: "logs") pod "d1063d6c-8667-43df-8967-4a22ce919924" (UID: "d1063d6c-8667-43df-8967-4a22ce919924"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 02:39:39.606785 master-0 kubenswrapper[31411]: I0224 02:39:39.606718 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1063d6c-8667-43df-8967-4a22ce919924-kube-api-access-sc588" (OuterVolumeSpecName: "kube-api-access-sc588") pod "d1063d6c-8667-43df-8967-4a22ce919924" (UID: "d1063d6c-8667-43df-8967-4a22ce919924"). InnerVolumeSpecName "kube-api-access-sc588". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:39:39.650169 master-0 kubenswrapper[31411]: I0224 02:39:39.650115 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1063d6c-8667-43df-8967-4a22ce919924" (UID: "d1063d6c-8667-43df-8967-4a22ce919924"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:39.652931 master-0 kubenswrapper[31411]: I0224 02:39:39.652882 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-config-data" (OuterVolumeSpecName: "config-data") pod "d1063d6c-8667-43df-8967-4a22ce919924" (UID: "d1063d6c-8667-43df-8967-4a22ce919924"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:39.699250 master-0 kubenswrapper[31411]: I0224 02:39:39.699163 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d1063d6c-8667-43df-8967-4a22ce919924" (UID: "d1063d6c-8667-43df-8967-4a22ce919924"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:39.703354 master-0 kubenswrapper[31411]: I0224 02:39:39.703315 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:39.703354 master-0 kubenswrapper[31411]: I0224 02:39:39.703350 31411 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1063d6c-8667-43df-8967-4a22ce919924-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:39.703499 master-0 kubenswrapper[31411]: I0224 02:39:39.703362 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc588\" (UniqueName: \"kubernetes.io/projected/d1063d6c-8667-43df-8967-4a22ce919924-kube-api-access-sc588\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:39.703499 master-0 kubenswrapper[31411]: I0224 02:39:39.703373 31411 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:39.703499 master-0 kubenswrapper[31411]: I0224 02:39:39.703383 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1063d6c-8667-43df-8967-4a22ce919924-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:40.500181 master-0 kubenswrapper[31411]: I0224 02:39:40.500055 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:39:40.586544 master-0 kubenswrapper[31411]: I0224 02:39:40.586462 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:39:40.591203 master-0 kubenswrapper[31411]: I0224 02:39:40.591109 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:39:40.646306 master-0 kubenswrapper[31411]: I0224 02:39:40.646207 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:39:40.651053 master-0 kubenswrapper[31411]: E0224 02:39:40.650974 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-log" Feb 24 02:39:40.651053 master-0 kubenswrapper[31411]: I0224 02:39:40.651026 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-log" Feb 24 02:39:40.651294 master-0 kubenswrapper[31411]: E0224 02:39:40.651125 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-metadata" Feb 24 02:39:40.651294 master-0 kubenswrapper[31411]: I0224 02:39:40.651135 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-metadata" Feb 24 02:39:40.655623 master-0 
kubenswrapper[31411]: I0224 02:39:40.655561 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-metadata" Feb 24 02:39:40.655623 master-0 kubenswrapper[31411]: I0224 02:39:40.655631 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1063d6c-8667-43df-8967-4a22ce919924" containerName="nova-metadata-log" Feb 24 02:39:40.657535 master-0 kubenswrapper[31411]: I0224 02:39:40.657484 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:39:40.663002 master-0 kubenswrapper[31411]: I0224 02:39:40.662901 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 02:39:40.663623 master-0 kubenswrapper[31411]: I0224 02:39:40.663546 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 24 02:39:40.672561 master-0 kubenswrapper[31411]: I0224 02:39:40.672461 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:39:40.800345 master-0 kubenswrapper[31411]: I0224 02:39:40.800165 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a42cba47-e321-4b11-8df3-14382a641521-logs\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.800345 master-0 kubenswrapper[31411]: I0224 02:39:40.800308 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2npgn\" (UniqueName: \"kubernetes.io/projected/a42cba47-e321-4b11-8df3-14382a641521-kube-api-access-2npgn\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.801041 master-0 kubenswrapper[31411]: I0224 02:39:40.800374 
31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a42cba47-e321-4b11-8df3-14382a641521-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.801041 master-0 kubenswrapper[31411]: I0224 02:39:40.800863 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42cba47-e321-4b11-8df3-14382a641521-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.801492 master-0 kubenswrapper[31411]: I0224 02:39:40.801416 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42cba47-e321-4b11-8df3-14382a641521-config-data\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.906539 master-0 kubenswrapper[31411]: I0224 02:39:40.906399 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42cba47-e321-4b11-8df3-14382a641521-config-data\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.907048 master-0 kubenswrapper[31411]: I0224 02:39:40.906679 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a42cba47-e321-4b11-8df3-14382a641521-logs\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.907048 master-0 kubenswrapper[31411]: I0224 02:39:40.906812 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-2npgn\" (UniqueName: \"kubernetes.io/projected/a42cba47-e321-4b11-8df3-14382a641521-kube-api-access-2npgn\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.907048 master-0 kubenswrapper[31411]: I0224 02:39:40.906936 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a42cba47-e321-4b11-8df3-14382a641521-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.907409 master-0 kubenswrapper[31411]: I0224 02:39:40.907159 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42cba47-e321-4b11-8df3-14382a641521-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.907409 master-0 kubenswrapper[31411]: I0224 02:39:40.907219 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a42cba47-e321-4b11-8df3-14382a641521-logs\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.913428 master-0 kubenswrapper[31411]: I0224 02:39:40.913358 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42cba47-e321-4b11-8df3-14382a641521-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.913658 master-0 kubenswrapper[31411]: I0224 02:39:40.913458 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42cba47-e321-4b11-8df3-14382a641521-config-data\") pod 
\"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.917073 master-0 kubenswrapper[31411]: I0224 02:39:40.915033 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a42cba47-e321-4b11-8df3-14382a641521-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:40.929085 master-0 kubenswrapper[31411]: I0224 02:39:40.929022 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2npgn\" (UniqueName: \"kubernetes.io/projected/a42cba47-e321-4b11-8df3-14382a641521-kube-api-access-2npgn\") pod \"nova-metadata-0\" (UID: \"a42cba47-e321-4b11-8df3-14382a641521\") " pod="openstack/nova-metadata-0" Feb 24 02:39:41.027788 master-0 kubenswrapper[31411]: I0224 02:39:41.027555 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 02:39:41.116683 master-0 kubenswrapper[31411]: I0224 02:39:41.116552 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1063d6c-8667-43df-8967-4a22ce919924" path="/var/lib/kubelet/pods/d1063d6c-8667-43df-8967-4a22ce919924/volumes" Feb 24 02:39:41.386109 master-0 kubenswrapper[31411]: I0224 02:39:41.386057 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 02:39:41.522249 master-0 kubenswrapper[31411]: I0224 02:39:41.522043 31411 generic.go:334] "Generic (PLEG): container finished" podID="a0c89c95-1f73-4996-87ef-183a13c5891b" containerID="aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d" exitCode=0 Feb 24 02:39:41.522249 master-0 kubenswrapper[31411]: I0224 02:39:41.522142 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0c89c95-1f73-4996-87ef-183a13c5891b","Type":"ContainerDied","Data":"aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d"} Feb 24 02:39:41.522249 master-0 kubenswrapper[31411]: I0224 02:39:41.522203 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 02:39:41.522249 master-0 kubenswrapper[31411]: I0224 02:39:41.522238 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0c89c95-1f73-4996-87ef-183a13c5891b","Type":"ContainerDied","Data":"a2d663333afe44b9dc109d3a8046b259f66dbb401a1198bbe7452059e7742f68"} Feb 24 02:39:41.522249 master-0 kubenswrapper[31411]: I0224 02:39:41.522277 31411 scope.go:117] "RemoveContainer" containerID="aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d" Feb 24 02:39:41.565686 master-0 kubenswrapper[31411]: I0224 02:39:41.565084 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-config-data\") pod \"a0c89c95-1f73-4996-87ef-183a13c5891b\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " Feb 24 02:39:41.565686 master-0 kubenswrapper[31411]: I0224 02:39:41.565313 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-combined-ca-bundle\") pod 
\"a0c89c95-1f73-4996-87ef-183a13c5891b\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " Feb 24 02:39:41.565686 master-0 kubenswrapper[31411]: I0224 02:39:41.565375 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcb4h\" (UniqueName: \"kubernetes.io/projected/a0c89c95-1f73-4996-87ef-183a13c5891b-kube-api-access-kcb4h\") pod \"a0c89c95-1f73-4996-87ef-183a13c5891b\" (UID: \"a0c89c95-1f73-4996-87ef-183a13c5891b\") " Feb 24 02:39:41.585596 master-0 kubenswrapper[31411]: I0224 02:39:41.577056 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c89c95-1f73-4996-87ef-183a13c5891b-kube-api-access-kcb4h" (OuterVolumeSpecName: "kube-api-access-kcb4h") pod "a0c89c95-1f73-4996-87ef-183a13c5891b" (UID: "a0c89c95-1f73-4996-87ef-183a13c5891b"). InnerVolumeSpecName "kube-api-access-kcb4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 02:39:41.636595 master-0 kubenswrapper[31411]: I0224 02:39:41.633517 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0c89c95-1f73-4996-87ef-183a13c5891b" (UID: "a0c89c95-1f73-4996-87ef-183a13c5891b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:41.643595 master-0 kubenswrapper[31411]: I0224 02:39:41.637858 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-config-data" (OuterVolumeSpecName: "config-data") pod "a0c89c95-1f73-4996-87ef-183a13c5891b" (UID: "a0c89c95-1f73-4996-87ef-183a13c5891b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 02:39:41.652603 master-0 kubenswrapper[31411]: I0224 02:39:41.647051 31411 scope.go:117] "RemoveContainer" containerID="aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d" Feb 24 02:39:41.652603 master-0 kubenswrapper[31411]: I0224 02:39:41.648147 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 02:39:41.658605 master-0 kubenswrapper[31411]: E0224 02:39:41.653810 31411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d\": container with ID starting with aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d not found: ID does not exist" containerID="aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d" Feb 24 02:39:41.658605 master-0 kubenswrapper[31411]: I0224 02:39:41.653903 31411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d"} err="failed to get container status \"aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d\": rpc error: code = NotFound desc = could not find container \"aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d\": container with ID starting with aa137fbe31c14117fca8e22a5b021aa2ed89c5fcd7d0e23ca31df88996652d0d not found: ID does not exist" Feb 24 02:39:41.672621 master-0 kubenswrapper[31411]: I0224 02:39:41.669482 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:41.672621 master-0 kubenswrapper[31411]: I0224 02:39:41.669531 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0c89c95-1f73-4996-87ef-183a13c5891b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:41.672621 master-0 kubenswrapper[31411]: I0224 02:39:41.669542 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcb4h\" (UniqueName: \"kubernetes.io/projected/a0c89c95-1f73-4996-87ef-183a13c5891b-kube-api-access-kcb4h\") on node \"master-0\" DevicePath \"\"" Feb 24 02:39:41.880757 master-0 kubenswrapper[31411]: I0224 02:39:41.880436 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 02:39:41.903890 master-0 kubenswrapper[31411]: I0224 02:39:41.903839 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 02:39:41.917627 master-0 kubenswrapper[31411]: I0224 02:39:41.917460 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 02:39:41.919628 master-0 kubenswrapper[31411]: E0224 02:39:41.918043 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c89c95-1f73-4996-87ef-183a13c5891b" containerName="nova-scheduler-scheduler" Feb 24 02:39:41.919628 master-0 kubenswrapper[31411]: I0224 02:39:41.918064 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c89c95-1f73-4996-87ef-183a13c5891b" containerName="nova-scheduler-scheduler" Feb 24 02:39:41.919628 master-0 kubenswrapper[31411]: I0224 02:39:41.918381 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c89c95-1f73-4996-87ef-183a13c5891b" containerName="nova-scheduler-scheduler" Feb 24 02:39:41.925661 master-0 kubenswrapper[31411]: I0224 02:39:41.924327 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 02:39:41.940599 master-0 kubenswrapper[31411]: I0224 02:39:41.936754 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 24 02:39:41.957229 master-0 kubenswrapper[31411]: I0224 02:39:41.956851 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 02:39:42.081330 master-0 kubenswrapper[31411]: I0224 02:39:42.081259 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad415201-c48f-463e-b907-3a9bf748006d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ad415201-c48f-463e-b907-3a9bf748006d\") " pod="openstack/nova-scheduler-0" Feb 24 02:39:42.081481 master-0 kubenswrapper[31411]: I0224 02:39:42.081414 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad415201-c48f-463e-b907-3a9bf748006d-config-data\") pod \"nova-scheduler-0\" (UID: \"ad415201-c48f-463e-b907-3a9bf748006d\") " pod="openstack/nova-scheduler-0" Feb 24 02:39:42.081481 master-0 kubenswrapper[31411]: I0224 02:39:42.081439 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlk6\" (UniqueName: \"kubernetes.io/projected/ad415201-c48f-463e-b907-3a9bf748006d-kube-api-access-8nlk6\") pod \"nova-scheduler-0\" (UID: \"ad415201-c48f-463e-b907-3a9bf748006d\") " pod="openstack/nova-scheduler-0" Feb 24 02:39:42.183619 master-0 kubenswrapper[31411]: I0224 02:39:42.183541 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad415201-c48f-463e-b907-3a9bf748006d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ad415201-c48f-463e-b907-3a9bf748006d\") " pod="openstack/nova-scheduler-0" 
Feb 24 02:39:42.184005 master-0 kubenswrapper[31411]: I0224 02:39:42.183969 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad415201-c48f-463e-b907-3a9bf748006d-config-data\") pod \"nova-scheduler-0\" (UID: \"ad415201-c48f-463e-b907-3a9bf748006d\") " pod="openstack/nova-scheduler-0" Feb 24 02:39:42.184066 master-0 kubenswrapper[31411]: I0224 02:39:42.184010 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nlk6\" (UniqueName: \"kubernetes.io/projected/ad415201-c48f-463e-b907-3a9bf748006d-kube-api-access-8nlk6\") pod \"nova-scheduler-0\" (UID: \"ad415201-c48f-463e-b907-3a9bf748006d\") " pod="openstack/nova-scheduler-0" Feb 24 02:39:42.196911 master-0 kubenswrapper[31411]: I0224 02:39:42.194603 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad415201-c48f-463e-b907-3a9bf748006d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ad415201-c48f-463e-b907-3a9bf748006d\") " pod="openstack/nova-scheduler-0" Feb 24 02:39:42.196911 master-0 kubenswrapper[31411]: I0224 02:39:42.194898 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad415201-c48f-463e-b907-3a9bf748006d-config-data\") pod \"nova-scheduler-0\" (UID: \"ad415201-c48f-463e-b907-3a9bf748006d\") " pod="openstack/nova-scheduler-0" Feb 24 02:39:42.218056 master-0 kubenswrapper[31411]: I0224 02:39:42.217984 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nlk6\" (UniqueName: \"kubernetes.io/projected/ad415201-c48f-463e-b907-3a9bf748006d-kube-api-access-8nlk6\") pod \"nova-scheduler-0\" (UID: \"ad415201-c48f-463e-b907-3a9bf748006d\") " pod="openstack/nova-scheduler-0" Feb 24 02:39:42.438865 master-0 kubenswrapper[31411]: I0224 02:39:42.438693 31411 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 02:39:42.544488 master-0 kubenswrapper[31411]: I0224 02:39:42.544386 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a42cba47-e321-4b11-8df3-14382a641521","Type":"ContainerStarted","Data":"144fa1a93e3dce7ec3bdc45ae4401bc587ed4566142a606913493d9be1a72a9a"} Feb 24 02:39:42.544488 master-0 kubenswrapper[31411]: I0224 02:39:42.544465 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a42cba47-e321-4b11-8df3-14382a641521","Type":"ContainerStarted","Data":"76e397a493398910670ad5138e01a9b96884bcec34c719df3f4ef42a30a35b77"} Feb 24 02:39:42.544488 master-0 kubenswrapper[31411]: I0224 02:39:42.544479 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a42cba47-e321-4b11-8df3-14382a641521","Type":"ContainerStarted","Data":"886bb106ee4931bd534904a2d5e1f58d2be86e2870818289a0f3f7e99f7aa59c"} Feb 24 02:39:42.582504 master-0 kubenswrapper[31411]: I0224 02:39:42.582429 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.582409769 podStartE2EDuration="2.582409769s" podCreationTimestamp="2026-02-24 02:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:42.581200115 +0000 UTC m=+1125.798397951" watchObservedRunningTime="2026-02-24 02:39:42.582409769 +0000 UTC m=+1125.799607615" Feb 24 02:39:43.026803 master-0 kubenswrapper[31411]: W0224 02:39:43.026742 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad415201_c48f_463e_b907_3a9bf748006d.slice/crio-213fb17a951cdabe0ab8485269cf350b29bf0ba370531ffc11e7b3357ffc6f61 WatchSource:0}: Error finding container 
213fb17a951cdabe0ab8485269cf350b29bf0ba370531ffc11e7b3357ffc6f61: Status 404 returned error can't find the container with id 213fb17a951cdabe0ab8485269cf350b29bf0ba370531ffc11e7b3357ffc6f61 Feb 24 02:39:43.027884 master-0 kubenswrapper[31411]: I0224 02:39:43.027803 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 02:39:43.110092 master-0 kubenswrapper[31411]: I0224 02:39:43.110035 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c89c95-1f73-4996-87ef-183a13c5891b" path="/var/lib/kubelet/pods/a0c89c95-1f73-4996-87ef-183a13c5891b/volumes" Feb 24 02:39:43.565436 master-0 kubenswrapper[31411]: I0224 02:39:43.565304 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ad415201-c48f-463e-b907-3a9bf748006d","Type":"ContainerStarted","Data":"e9be5f788162a2b427b10442a3bfaf16d3d6a2a65b40abb2f0b67a3348aedb4b"} Feb 24 02:39:43.565436 master-0 kubenswrapper[31411]: I0224 02:39:43.565381 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ad415201-c48f-463e-b907-3a9bf748006d","Type":"ContainerStarted","Data":"213fb17a951cdabe0ab8485269cf350b29bf0ba370531ffc11e7b3357ffc6f61"} Feb 24 02:39:43.611225 master-0 kubenswrapper[31411]: I0224 02:39:43.611003 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.610981192 podStartE2EDuration="2.610981192s" podCreationTimestamp="2026-02-24 02:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:39:43.598951033 +0000 UTC m=+1126.816148909" watchObservedRunningTime="2026-02-24 02:39:43.610981192 +0000 UTC m=+1126.828179038" Feb 24 02:39:46.028769 master-0 kubenswrapper[31411]: I0224 02:39:46.028623 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Feb 24 02:39:46.028769 master-0 kubenswrapper[31411]: I0224 02:39:46.028728 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 02:39:47.439883 master-0 kubenswrapper[31411]: I0224 02:39:47.439823 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 24 02:39:47.942870 master-0 kubenswrapper[31411]: I0224 02:39:47.942754 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 02:39:47.942870 master-0 kubenswrapper[31411]: I0224 02:39:47.942853 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 02:39:48.974527 master-0 kubenswrapper[31411]: I0224 02:39:48.969739 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="685eb0ae-79cd-488a-894f-2ef620e61225" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.16:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:39:48.974527 master-0 kubenswrapper[31411]: I0224 02:39:48.970098 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="685eb0ae-79cd-488a-894f-2ef620e61225" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.16:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:39:51.028360 master-0 kubenswrapper[31411]: I0224 02:39:51.028285 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 24 02:39:51.028360 master-0 kubenswrapper[31411]: I0224 02:39:51.028359 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 24 02:39:52.045968 master-0 kubenswrapper[31411]: I0224 02:39:52.045838 31411 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="a42cba47-e321-4b11-8df3-14382a641521" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.17:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 02:39:52.046927 master-0 kubenswrapper[31411]: I0224 02:39:52.045933 31411 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a42cba47-e321-4b11-8df3-14382a641521" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.17:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 02:39:52.440935 master-0 kubenswrapper[31411]: I0224 02:39:52.440751 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 24 02:39:52.488935 master-0 kubenswrapper[31411]: I0224 02:39:52.488857 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 24 02:39:52.809564 master-0 kubenswrapper[31411]: I0224 02:39:52.809440 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 24 02:39:57.957385 master-0 kubenswrapper[31411]: I0224 02:39:57.957295 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 02:39:57.958426 master-0 kubenswrapper[31411]: I0224 02:39:57.957481 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 02:39:57.958426 master-0 kubenswrapper[31411]: I0224 02:39:57.958243 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 02:39:57.958426 master-0 kubenswrapper[31411]: I0224 02:39:57.958329 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 02:39:57.968144 master-0 kubenswrapper[31411]: I0224 02:39:57.968085 
31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 24 02:39:57.968377 master-0 kubenswrapper[31411]: I0224 02:39:57.968340 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 24 02:40:01.038480 master-0 kubenswrapper[31411]: I0224 02:40:01.038374 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 24 02:40:01.039749 master-0 kubenswrapper[31411]: I0224 02:40:01.038715 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 24 02:40:01.048767 master-0 kubenswrapper[31411]: I0224 02:40:01.048677 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 24 02:40:01.049411 master-0 kubenswrapper[31411]: I0224 02:40:01.049341 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 24 02:40:37.405093 master-0 kubenswrapper[31411]: I0224 02:40:37.405021 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-q2bh9"]
Feb 24 02:40:37.406341 master-0 kubenswrapper[31411]: I0224 02:40:37.406299 31411 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" podUID="432763a0-0405-497d-b4a2-d253c31a5d3e" containerName="sushy-emulator" containerID="cri-o://9554571b8fb1928f8dad689398cbb0a68508918fa53aff32e7ab5771cdb98c43" gracePeriod=30
Feb 24 02:40:38.151294 master-0 kubenswrapper[31411]: I0224 02:40:38.151246 31411 generic.go:334] "Generic (PLEG): container finished" podID="432763a0-0405-497d-b4a2-d253c31a5d3e" containerID="9554571b8fb1928f8dad689398cbb0a68508918fa53aff32e7ab5771cdb98c43" exitCode=0
Feb 24 02:40:38.151636 master-0 kubenswrapper[31411]: I0224 02:40:38.151501 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" event={"ID":"432763a0-0405-497d-b4a2-d253c31a5d3e","Type":"ContainerDied","Data":"9554571b8fb1928f8dad689398cbb0a68508918fa53aff32e7ab5771cdb98c43"}
Feb 24 02:40:38.375157 master-0 kubenswrapper[31411]: I0224 02:40:38.375103 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9"
Feb 24 02:40:38.546879 master-0 kubenswrapper[31411]: I0224 02:40:38.546820 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-6n2dg"]
Feb 24 02:40:38.548219 master-0 kubenswrapper[31411]: E0224 02:40:38.548199 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="432763a0-0405-497d-b4a2-d253c31a5d3e" containerName="sushy-emulator"
Feb 24 02:40:38.548366 master-0 kubenswrapper[31411]: I0224 02:40:38.548353 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="432763a0-0405-497d-b4a2-d253c31a5d3e" containerName="sushy-emulator"
Feb 24 02:40:38.548838 master-0 kubenswrapper[31411]: I0224 02:40:38.548797 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="432763a0-0405-497d-b4a2-d253c31a5d3e" containerName="sushy-emulator"
Feb 24 02:40:38.558960 master-0 kubenswrapper[31411]: I0224 02:40:38.558904 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.559467 master-0 kubenswrapper[31411]: I0224 02:40:38.558940 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-594mx\" (UniqueName: \"kubernetes.io/projected/432763a0-0405-497d-b4a2-d253c31a5d3e-kube-api-access-594mx\") pod \"432763a0-0405-497d-b4a2-d253c31a5d3e\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") "
Feb 24 02:40:38.559793 master-0 kubenswrapper[31411]: I0224 02:40:38.559737 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/432763a0-0405-497d-b4a2-d253c31a5d3e-os-client-config\") pod \"432763a0-0405-497d-b4a2-d253c31a5d3e\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") "
Feb 24 02:40:38.559864 master-0 kubenswrapper[31411]: I0224 02:40:38.559845 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/432763a0-0405-497d-b4a2-d253c31a5d3e-sushy-emulator-config\") pod \"432763a0-0405-497d-b4a2-d253c31a5d3e\" (UID: \"432763a0-0405-497d-b4a2-d253c31a5d3e\") "
Feb 24 02:40:38.560656 master-0 kubenswrapper[31411]: I0224 02:40:38.560612 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/432763a0-0405-497d-b4a2-d253c31a5d3e-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "432763a0-0405-497d-b4a2-d253c31a5d3e" (UID: "432763a0-0405-497d-b4a2-d253c31a5d3e"). InnerVolumeSpecName "sushy-emulator-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:40:38.561424 master-0 kubenswrapper[31411]: I0224 02:40:38.561372 31411 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/432763a0-0405-497d-b4a2-d253c31a5d3e-sushy-emulator-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:40:38.561897 master-0 kubenswrapper[31411]: I0224 02:40:38.561855 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-6n2dg"]
Feb 24 02:40:38.563438 master-0 kubenswrapper[31411]: I0224 02:40:38.563357 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432763a0-0405-497d-b4a2-d253c31a5d3e-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "432763a0-0405-497d-b4a2-d253c31a5d3e" (UID: "432763a0-0405-497d-b4a2-d253c31a5d3e"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:40:38.564953 master-0 kubenswrapper[31411]: I0224 02:40:38.564895 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/432763a0-0405-497d-b4a2-d253c31a5d3e-kube-api-access-594mx" (OuterVolumeSpecName: "kube-api-access-594mx") pod "432763a0-0405-497d-b4a2-d253c31a5d3e" (UID: "432763a0-0405-497d-b4a2-d253c31a5d3e"). InnerVolumeSpecName "kube-api-access-594mx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:40:38.663708 master-0 kubenswrapper[31411]: I0224 02:40:38.663632 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trk7k\" (UniqueName: \"kubernetes.io/projected/f8f3c1db-2333-4c12-8352-7461021a6140-kube-api-access-trk7k\") pod \"sushy-emulator-84965d5d88-6n2dg\" (UID: \"f8f3c1db-2333-4c12-8352-7461021a6140\") " pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.663975 master-0 kubenswrapper[31411]: I0224 02:40:38.663792 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/f8f3c1db-2333-4c12-8352-7461021a6140-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-6n2dg\" (UID: \"f8f3c1db-2333-4c12-8352-7461021a6140\") " pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.663975 master-0 kubenswrapper[31411]: I0224 02:40:38.663924 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f8f3c1db-2333-4c12-8352-7461021a6140-os-client-config\") pod \"sushy-emulator-84965d5d88-6n2dg\" (UID: \"f8f3c1db-2333-4c12-8352-7461021a6140\") " pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.664205 master-0 kubenswrapper[31411]: I0224 02:40:38.664112 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-594mx\" (UniqueName: \"kubernetes.io/projected/432763a0-0405-497d-b4a2-d253c31a5d3e-kube-api-access-594mx\") on node \"master-0\" DevicePath \"\""
Feb 24 02:40:38.664205 master-0 kubenswrapper[31411]: I0224 02:40:38.664136 31411 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/432763a0-0405-497d-b4a2-d253c31a5d3e-os-client-config\") on node \"master-0\" DevicePath \"\""
Feb 24 02:40:38.766817 master-0 kubenswrapper[31411]: I0224 02:40:38.766615 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/f8f3c1db-2333-4c12-8352-7461021a6140-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-6n2dg\" (UID: \"f8f3c1db-2333-4c12-8352-7461021a6140\") " pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.766817 master-0 kubenswrapper[31411]: I0224 02:40:38.766779 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f8f3c1db-2333-4c12-8352-7461021a6140-os-client-config\") pod \"sushy-emulator-84965d5d88-6n2dg\" (UID: \"f8f3c1db-2333-4c12-8352-7461021a6140\") " pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.767133 master-0 kubenswrapper[31411]: I0224 02:40:38.766992 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trk7k\" (UniqueName: \"kubernetes.io/projected/f8f3c1db-2333-4c12-8352-7461021a6140-kube-api-access-trk7k\") pod \"sushy-emulator-84965d5d88-6n2dg\" (UID: \"f8f3c1db-2333-4c12-8352-7461021a6140\") " pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.769555 master-0 kubenswrapper[31411]: I0224 02:40:38.769491 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/f8f3c1db-2333-4c12-8352-7461021a6140-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-6n2dg\" (UID: \"f8f3c1db-2333-4c12-8352-7461021a6140\") " pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.772723 master-0 kubenswrapper[31411]: I0224 02:40:38.772208 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/f8f3c1db-2333-4c12-8352-7461021a6140-os-client-config\") pod \"sushy-emulator-84965d5d88-6n2dg\" (UID: \"f8f3c1db-2333-4c12-8352-7461021a6140\") " pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.798409 master-0 kubenswrapper[31411]: I0224 02:40:38.798339 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trk7k\" (UniqueName: \"kubernetes.io/projected/f8f3c1db-2333-4c12-8352-7461021a6140-kube-api-access-trk7k\") pod \"sushy-emulator-84965d5d88-6n2dg\" (UID: \"f8f3c1db-2333-4c12-8352-7461021a6140\") " pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:38.935367 master-0 kubenswrapper[31411]: I0224 02:40:38.935183 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:39.219659 master-0 kubenswrapper[31411]: I0224 02:40:39.171525 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9" event={"ID":"432763a0-0405-497d-b4a2-d253c31a5d3e","Type":"ContainerDied","Data":"ee56223b1fb091dd4b599b32727ba11df4ca2828b744f13aff27734cceeb6f8c"}
Feb 24 02:40:39.219659 master-0 kubenswrapper[31411]: I0224 02:40:39.171670 31411 scope.go:117] "RemoveContainer" containerID="9554571b8fb1928f8dad689398cbb0a68508918fa53aff32e7ab5771cdb98c43"
Feb 24 02:40:39.219659 master-0 kubenswrapper[31411]: I0224 02:40:39.171882 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-q2bh9"
Feb 24 02:40:39.253619 master-0 kubenswrapper[31411]: I0224 02:40:39.247220 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-q2bh9"]
Feb 24 02:40:39.266723 master-0 kubenswrapper[31411]: I0224 02:40:39.266373 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-q2bh9"]
Feb 24 02:40:39.721795 master-0 kubenswrapper[31411]: I0224 02:40:39.721720 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-6n2dg"]
Feb 24 02:40:40.198816 master-0 kubenswrapper[31411]: I0224 02:40:40.198734 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg" event={"ID":"f8f3c1db-2333-4c12-8352-7461021a6140","Type":"ContainerStarted","Data":"b931974292975cc7a245aa4c314f1bcc03e4aa62fc7eef53a016c8ba7d6511f8"}
Feb 24 02:40:40.198816 master-0 kubenswrapper[31411]: I0224 02:40:40.198815 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg" event={"ID":"f8f3c1db-2333-4c12-8352-7461021a6140","Type":"ContainerStarted","Data":"f295df183776fb2f51b723bc7f2366f25825c5e14ea001006b16fd91311ca249"}
Feb 24 02:40:40.253344 master-0 kubenswrapper[31411]: I0224 02:40:40.253217 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg" podStartSLOduration=2.253186419 podStartE2EDuration="2.253186419s" podCreationTimestamp="2026-02-24 02:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 02:40:40.226086685 +0000 UTC m=+1183.443284571" watchObservedRunningTime="2026-02-24 02:40:40.253186419 +0000 UTC m=+1183.470384305"
Feb 24 02:40:41.118710 master-0 kubenswrapper[31411]: I0224 02:40:41.118623 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="432763a0-0405-497d-b4a2-d253c31a5d3e" path="/var/lib/kubelet/pods/432763a0-0405-497d-b4a2-d253c31a5d3e/volumes"
Feb 24 02:40:48.935886 master-0 kubenswrapper[31411]: I0224 02:40:48.935776 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:48.937342 master-0 kubenswrapper[31411]: I0224 02:40:48.935903 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:48.954301 master-0 kubenswrapper[31411]: I0224 02:40:48.954205 31411 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:40:49.343446 master-0 kubenswrapper[31411]: I0224 02:40:49.343370 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-84965d5d88-6n2dg"
Feb 24 02:41:59.326463 master-0 kubenswrapper[31411]: I0224 02:41:59.326385 31411 scope.go:117] "RemoveContainer" containerID="ad0ba8e89adf8f5cee99a82e7af1422d1983fe7b26bd545688e3c013c7c9b9b7"
Feb 24 02:41:59.384167 master-0 kubenswrapper[31411]: I0224 02:41:59.383892 31411 scope.go:117] "RemoveContainer" containerID="c20e22b587026297ecbfb9bb4a70af1be08b2eb8a914ba2123643c2845e06777"
Feb 24 02:42:24.442714 master-0 kubenswrapper[31411]: I0224 02:42:24.442641 31411 patch_prober.go:28] interesting pod/monitoring-plugin-5d9ddb8754-xtrdd container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.96:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 02:42:24.443364 master-0 kubenswrapper[31411]: I0224 02:42:24.442730 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-5d9ddb8754-xtrdd" podUID="823c983e-f9a6-4074-9a69-14ec0666dfd5" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.128.0.96:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:42:24.820808 master-0 kubenswrapper[31411]: I0224 02:42:24.820725 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" podUID="e812dec6-4f25-4ba5-b08b-c2c7db77b4b3" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.156:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:42:24.821101 master-0 kubenswrapper[31411]: I0224 02:42:24.820872 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" podUID="e812dec6-4f25-4ba5-b08b-c2c7db77b4b3" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.156:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:42:59.528939 master-0 kubenswrapper[31411]: I0224 02:42:59.528870 31411 scope.go:117] "RemoveContainer" containerID="80f83c7016640d60426f83a31172755f75a0e237220d8babddf8665022bb3a2d"
Feb 24 02:42:59.558271 master-0 kubenswrapper[31411]: I0224 02:42:59.558216 31411 scope.go:117] "RemoveContainer" containerID="076b9809054f4fc762518eee1e3e36bf62481bd76b5142ee7d69a6bd72ff5f8a"
Feb 24 02:42:59.637400 master-0 kubenswrapper[31411]: I0224 02:42:59.637344 31411 scope.go:117] "RemoveContainer" containerID="29c9cf6824c423edde9609e265c6f1c4cd9529c2ca073aad171b549468b2cff5"
Feb 24 02:42:59.703342 master-0 kubenswrapper[31411]: I0224 02:42:59.703209 31411 scope.go:117] "RemoveContainer" containerID="e385e5e5a8a5cfb0183077cd94d1d4cb19ee83e46fecb9613441aff3b0fa8342"
Feb 24 02:43:38.868131 master-0 kubenswrapper[31411]: I0224 02:43:38.867825 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="93374608-d6a1-4e71-8682-3a86e5815f29" containerName="galera" probeResult="failure" output="command timed out"
Feb 24 02:43:38.872350 master-0 kubenswrapper[31411]: I0224 02:43:38.872030 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="93374608-d6a1-4e71-8682-3a86e5815f29" containerName="galera" probeResult="failure" output="command timed out"
Feb 24 02:43:41.993601 master-0 kubenswrapper[31411]: I0224 02:43:41.984106 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="86b07869-3ccf-46a9-9ca3-9954a1508cff" containerName="galera" probeResult="failure" output="command timed out"
Feb 24 02:43:50.386606 master-0 kubenswrapper[31411]: I0224 02:43:49.812518 31411 patch_prober.go:28] interesting pod/controller-manager-c67bf58c9-mn7dg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.95:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 02:43:50.386606 master-0 kubenswrapper[31411]: I0224 02:43:49.812601 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c67bf58c9-mn7dg" podUID="6a26aae0-6a54-41a9-8532-26195010c7cc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.95:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:43:50.415602 master-0 kubenswrapper[31411]: I0224 02:43:50.410912 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-qqt7p" podUID="b085f760-0e24-41a8-af09-538396aad935" containerName="registry-server" probeResult="failure" output=<
Feb 24 02:43:50.415602 master-0 kubenswrapper[31411]: timeout: failed to connect service ":50051" within 1s
Feb 24 02:43:50.415602 master-0 kubenswrapper[31411]: >
Feb 24 02:43:54.262592 master-0 kubenswrapper[31411]: I0224 02:43:54.260291 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-6ac23-scheduler-0" podUID="ed7505b2-4ddf-4df3-a4f7-7b198aacd70b" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.128.0.238:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:43:54.917062 master-0 kubenswrapper[31411]: I0224 02:43:54.911832 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zfd69" podUID="44cfb629-0b50-4e8c-9b4c-e329a1b3c533" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.144:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:43:54.927147 master-0 kubenswrapper[31411]: I0224 02:43:54.927058 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt" podUID="3bb72077-6f36-439c-8cc0-83bdbfcc3935" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.146:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:43:54.944392 master-0 kubenswrapper[31411]: I0224 02:43:54.933975 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5t6bt" podUID="3bb72077-6f36-439c-8cc0-83bdbfcc3935" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.146:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:43:54.959656 master-0 kubenswrapper[31411]: I0224 02:43:54.955832 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="93374608-d6a1-4e71-8682-3a86e5815f29" containerName="galera" probeResult="failure" output="command timed out"
Feb 24 02:43:54.972171 master-0 kubenswrapper[31411]: I0224 02:43:54.971853 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="86b07869-3ccf-46a9-9ca3-9954a1508cff" containerName="galera" probeResult="failure" output="command timed out"
Feb 24 02:43:59.840504 master-0 kubenswrapper[31411]: I0224 02:43:59.840416 31411 scope.go:117] "RemoveContainer" containerID="9a9e66df9212f71e78ab3ccc9c8dcb0e8aa4cb35b5ded9f174d105d3405c6c1d"
Feb 24 02:43:59.893951 master-0 kubenswrapper[31411]: I0224 02:43:59.893428 31411 scope.go:117] "RemoveContainer" containerID="6cdec3934af4e34ee4975c9e5118c80f5e4af1435142981792a3527895d9d943"
Feb 24 02:44:32.338950 master-0 kubenswrapper[31411]: I0224 02:44:32.338299 31411 trace.go:236] Trace[1263162845]: "Calculate volume metrics of var-lib-ironic for pod openstack/ironic-conductor-0" (24-Feb-2026 02:44:30.742) (total time: 1595ms):
Feb 24 02:44:32.338950 master-0 kubenswrapper[31411]: Trace[1263162845]: [1.595536059s] [1.595536059s] END
Feb 24 02:44:32.665398 master-0 kubenswrapper[31411]: I0224 02:44:32.665246 31411 patch_prober.go:28] interesting pod/thanos-querier-69565684c5-snfqm container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.101:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 02:44:32.665398 master-0 kubenswrapper[31411]: I0224 02:44:32.665334 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-69565684c5-snfqm" podUID="133397d5-a069-4b31-b4d8-a7442bc62eba" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.128.0.101:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:44:54.886994 master-0 kubenswrapper[31411]: I0224 02:44:54.886854 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b9tqsfz" podUID="e812dec6-4f25-4ba5-b08b-c2c7db77b4b3" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.156:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 02:44:59.979476 master-0 kubenswrapper[31411]: I0224 02:44:59.979394 31411 scope.go:117] "RemoveContainer" containerID="764a91a85ac0d521c5936462cdbe700036ddc082cfa739afbda0b9bc198e3d84"
Feb 24 02:45:00.015842 master-0 kubenswrapper[31411]: I0224 02:45:00.015805 31411 scope.go:117] "RemoveContainer" containerID="3fbdcfc81c110b39ef61e318226540831254e1314a24eae991738b4b04cab74f"
Feb 24 02:45:00.175923 master-0 kubenswrapper[31411]: I0224 02:45:00.175835 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"]
Feb 24 02:45:00.178807 master-0 kubenswrapper[31411]: I0224 02:45:00.178767 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.181129 master-0 kubenswrapper[31411]: I0224 02:45:00.181080 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-dqz7q"
Feb 24 02:45:00.181129 master-0 kubenswrapper[31411]: I0224 02:45:00.181108 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 24 02:45:00.195942 master-0 kubenswrapper[31411]: I0224 02:45:00.195888 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"]
Feb 24 02:45:00.276465 master-0 kubenswrapper[31411]: I0224 02:45:00.276278 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-config-volume\") pod \"collect-profiles-29531685-l2l87\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.276465 master-0 kubenswrapper[31411]: I0224 02:45:00.276383 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-secret-volume\") pod \"collect-profiles-29531685-l2l87\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.276879 master-0 kubenswrapper[31411]: I0224 02:45:00.276529 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpcr\" (UniqueName: \"kubernetes.io/projected/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-kube-api-access-jfpcr\") pod \"collect-profiles-29531685-l2l87\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.379164 master-0 kubenswrapper[31411]: I0224 02:45:00.379091 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-config-volume\") pod \"collect-profiles-29531685-l2l87\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.379858 master-0 kubenswrapper[31411]: I0224 02:45:00.379834 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-secret-volume\") pod \"collect-profiles-29531685-l2l87\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.380188 master-0 kubenswrapper[31411]: I0224 02:45:00.380165 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfpcr\" (UniqueName: \"kubernetes.io/projected/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-kube-api-access-jfpcr\") pod \"collect-profiles-29531685-l2l87\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.382019 master-0 kubenswrapper[31411]: I0224 02:45:00.381886 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-config-volume\") pod \"collect-profiles-29531685-l2l87\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.386264 master-0 kubenswrapper[31411]: I0224 02:45:00.386203 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-secret-volume\") pod \"collect-profiles-29531685-l2l87\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.400479 master-0 kubenswrapper[31411]: I0224 02:45:00.400357 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfpcr\" (UniqueName: \"kubernetes.io/projected/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-kube-api-access-jfpcr\") pod \"collect-profiles-29531685-l2l87\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:00.498016 master-0 kubenswrapper[31411]: I0224 02:45:00.497924 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:01.058635 master-0 kubenswrapper[31411]: I0224 02:45:01.058510 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"]
Feb 24 02:45:01.063040 master-0 kubenswrapper[31411]: W0224 02:45:01.062993 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11ac8aa9_5aeb_463e_a090_d2c5bb2408af.slice/crio-59e16467bd3fdd8bb852324917595ea10744bdfcbff27472998d43837ce057e0 WatchSource:0}: Error finding container 59e16467bd3fdd8bb852324917595ea10744bdfcbff27472998d43837ce057e0: Status 404 returned error can't find the container with id 59e16467bd3fdd8bb852324917595ea10744bdfcbff27472998d43837ce057e0
Feb 24 02:45:01.224605 master-0 kubenswrapper[31411]: I0224 02:45:01.224524 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87" event={"ID":"11ac8aa9-5aeb-463e-a090-d2c5bb2408af","Type":"ContainerStarted","Data":"59e16467bd3fdd8bb852324917595ea10744bdfcbff27472998d43837ce057e0"}
Feb 24 02:45:02.242925 master-0 kubenswrapper[31411]: I0224 02:45:02.242861 31411 generic.go:334] "Generic (PLEG): container finished" podID="11ac8aa9-5aeb-463e-a090-d2c5bb2408af" containerID="8ffd673966bcde6477eef7bb84b7f3b34d5e8414897343804d74af4403a5c8e1" exitCode=0
Feb 24 02:45:02.242925 master-0 kubenswrapper[31411]: I0224 02:45:02.242922 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87" event={"ID":"11ac8aa9-5aeb-463e-a090-d2c5bb2408af","Type":"ContainerDied","Data":"8ffd673966bcde6477eef7bb84b7f3b34d5e8414897343804d74af4403a5c8e1"}
Feb 24 02:45:03.741871 master-0 kubenswrapper[31411]: I0224 02:45:03.741836 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:03.827217 master-0 kubenswrapper[31411]: I0224 02:45:03.827120 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfpcr\" (UniqueName: \"kubernetes.io/projected/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-kube-api-access-jfpcr\") pod \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") "
Feb 24 02:45:03.828716 master-0 kubenswrapper[31411]: I0224 02:45:03.828694 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-config-volume\") pod \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") "
Feb 24 02:45:03.829062 master-0 kubenswrapper[31411]: I0224 02:45:03.829048 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-secret-volume\") pod \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\" (UID: \"11ac8aa9-5aeb-463e-a090-d2c5bb2408af\") "
Feb 24 02:45:03.840650 master-0 kubenswrapper[31411]: I0224 02:45:03.838996 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "11ac8aa9-5aeb-463e-a090-d2c5bb2408af" (UID: "11ac8aa9-5aeb-463e-a090-d2c5bb2408af"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 02:45:03.840650 master-0 kubenswrapper[31411]: I0224 02:45:03.840459 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-config-volume" (OuterVolumeSpecName: "config-volume") pod "11ac8aa9-5aeb-463e-a090-d2c5bb2408af" (UID: "11ac8aa9-5aeb-463e-a090-d2c5bb2408af"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 02:45:03.847308 master-0 kubenswrapper[31411]: I0224 02:45:03.846990 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-kube-api-access-jfpcr" (OuterVolumeSpecName: "kube-api-access-jfpcr") pod "11ac8aa9-5aeb-463e-a090-d2c5bb2408af" (UID: "11ac8aa9-5aeb-463e-a090-d2c5bb2408af"). InnerVolumeSpecName "kube-api-access-jfpcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 02:45:03.932815 master-0 kubenswrapper[31411]: I0224 02:45:03.932655 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfpcr\" (UniqueName: \"kubernetes.io/projected/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-kube-api-access-jfpcr\") on node \"master-0\" DevicePath \"\""
Feb 24 02:45:03.932815 master-0 kubenswrapper[31411]: I0224 02:45:03.932711 31411 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 24 02:45:03.932815 master-0 kubenswrapper[31411]: I0224 02:45:03.932727 31411 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11ac8aa9-5aeb-463e-a090-d2c5bb2408af-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 24 02:45:04.276169 master-0 kubenswrapper[31411]: I0224 02:45:04.276057 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87" event={"ID":"11ac8aa9-5aeb-463e-a090-d2c5bb2408af","Type":"ContainerDied","Data":"59e16467bd3fdd8bb852324917595ea10744bdfcbff27472998d43837ce057e0"}
Feb 24 02:45:04.276169 master-0 kubenswrapper[31411]: I0224 02:45:04.276149 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e16467bd3fdd8bb852324917595ea10744bdfcbff27472998d43837ce057e0"
Feb 24 02:45:04.276624 master-0 kubenswrapper[31411]: I0224 02:45:04.276197 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531685-l2l87"
Feb 24 02:45:05.041636 master-0 kubenswrapper[31411]: I0224 02:45:05.041501 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw"]
Feb 24 02:45:05.057120 master-0 kubenswrapper[31411]: I0224 02:45:05.057042 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531640-kptmw"]
Feb 24 02:45:05.109089 master-0 kubenswrapper[31411]: I0224 02:45:05.108980 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24983c94-f158-4a07-854b-2e5455374f19" path="/var/lib/kubelet/pods/24983c94-f158-4a07-854b-2e5455374f19/volumes"
Feb 24 02:45:47.113133 master-0 kubenswrapper[31411]: I0224 02:45:47.113038 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e923-account-create-update-dswn2"]
Feb 24 02:45:47.121521 master-0 kubenswrapper[31411]: I0224 02:45:47.121452 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jl696"]
Feb 24 02:45:47.134982 master-0 kubenswrapper[31411]: I0224 02:45:47.133839 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e923-account-create-update-dswn2"]
Feb 24 02:45:47.146031 master-0 kubenswrapper[31411]: I0224 02:45:47.145952 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jl696"]
Feb 24 02:45:49.111817 master-0 kubenswrapper[31411]: I0224 02:45:49.111755 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9158aad4-fb9b-4900-99cd-c78da20e920d" path="/var/lib/kubelet/pods/9158aad4-fb9b-4900-99cd-c78da20e920d/volumes"
Feb 24 02:45:49.112689 master-0 kubenswrapper[31411]: I0224 02:45:49.112660 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc6984b0-e97a-4750-b89f-290aa2cc36b9" path="/var/lib/kubelet/pods/fc6984b0-e97a-4750-b89f-290aa2cc36b9/volumes"
Feb 24 02:45:53.137421 master-0 kubenswrapper[31411]: I0224 02:45:53.137314 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5d23-account-create-update-q2xlr"]
Feb 24 02:45:53.148855 master-0 kubenswrapper[31411]: I0224 02:45:53.148774 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-cjhw4"]
Feb 24 02:45:53.184755 master-0 kubenswrapper[31411]: I0224 02:45:53.184645 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5d23-account-create-update-q2xlr"]
Feb 24 02:45:53.200410 master-0 kubenswrapper[31411]: I0224 02:45:53.200316 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-d7rmf"]
Feb 24 02:45:53.300195 master-0 kubenswrapper[31411]: I0224 02:45:53.300111 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-07e8-account-create-update-4xjm5"]
Feb 24 02:45:53.353790 master-0 kubenswrapper[31411]: I0224 02:45:53.353674 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-cjhw4"]
Feb 24 02:45:53.368034 master-0 kubenswrapper[31411]: I0224 02:45:53.367951 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-07e8-account-create-update-4xjm5"]
Feb 24 02:45:53.466613 master-0 kubenswrapper[31411]: I0224 02:45:53.463847 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-d7rmf"]
Feb 24 02:45:55.116267 master-0 kubenswrapper[31411]: I0224 02:45:55.116178 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d27427-00de-48ea-879d-ff3376adfbae" path="/var/lib/kubelet/pods/33d27427-00de-48ea-879d-ff3376adfbae/volumes"
Feb 24 02:45:55.117706 master-0 kubenswrapper[31411]: I0224 02:45:55.117228 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34321c8e-7008-47b6-99ad-7752b89e8045"
path="/var/lib/kubelet/pods/34321c8e-7008-47b6-99ad-7752b89e8045/volumes" Feb 24 02:45:55.118334 master-0 kubenswrapper[31411]: I0224 02:45:55.118282 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25" path="/var/lib/kubelet/pods/5d17ac59-f3ce-460e-b0c4-c0eb0a41fd25/volumes" Feb 24 02:45:55.119894 master-0 kubenswrapper[31411]: I0224 02:45:55.119844 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856ea150-40e7-4381-ab60-83a9974d5ff7" path="/var/lib/kubelet/pods/856ea150-40e7-4381-ab60-83a9974d5ff7/volumes" Feb 24 02:46:00.483112 master-0 kubenswrapper[31411]: I0224 02:46:00.483025 31411 scope.go:117] "RemoveContainer" containerID="45b1b0a25f45e5b28d4c3d6363eda6915a2219746496fd39b35c97ca94f35908" Feb 24 02:46:00.510323 master-0 kubenswrapper[31411]: I0224 02:46:00.510258 31411 scope.go:117] "RemoveContainer" containerID="2124a3042cb7397e98aa772cf754d14881cec0b76a807be556609d45409f8987" Feb 24 02:46:00.996070 master-0 kubenswrapper[31411]: I0224 02:46:00.995990 31411 scope.go:117] "RemoveContainer" containerID="74de2ff9ddd0ff5fce54b3b6888b341dd50d253ae5a806a1d255df1f158482be" Feb 24 02:46:01.066966 master-0 kubenswrapper[31411]: I0224 02:46:01.064418 31411 scope.go:117] "RemoveContainer" containerID="44da10972596f19eda3d5b23c65ce994fe9ec7974580fd66084a9f783a62e356" Feb 24 02:46:01.789715 master-0 kubenswrapper[31411]: I0224 02:46:01.789401 31411 scope.go:117] "RemoveContainer" containerID="e51ee7952dfd445e552e98f87e6cac337f269d310845fcc3274f65c031cd5dd3" Feb 24 02:46:01.846382 master-0 kubenswrapper[31411]: I0224 02:46:01.846313 31411 scope.go:117] "RemoveContainer" containerID="fa0b6426fb5825820f55aba01104c816b6b323615f887f32756604f2dbc2e66b" Feb 24 02:46:01.876991 master-0 kubenswrapper[31411]: I0224 02:46:01.876937 31411 scope.go:117] "RemoveContainer" containerID="06f2b62681f4a8848bb5355e114e6e4bb4606b29bbb7762ebb5012cc0a5e1660" Feb 24 02:46:12.121130 master-0 
kubenswrapper[31411]: I0224 02:46:12.121001 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-klrwt"] Feb 24 02:46:12.266030 master-0 kubenswrapper[31411]: I0224 02:46:12.265908 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-klrwt"] Feb 24 02:46:13.107659 master-0 kubenswrapper[31411]: I0224 02:46:13.107568 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dacf13b3-b36b-40db-8824-550ffb0b6cbd" path="/var/lib/kubelet/pods/dacf13b3-b36b-40db-8824-550ffb0b6cbd/volumes" Feb 24 02:46:19.081798 master-0 kubenswrapper[31411]: I0224 02:46:19.081695 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7051-account-create-update-2j7gx"] Feb 24 02:46:19.117425 master-0 kubenswrapper[31411]: I0224 02:46:19.117227 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-x5qn7"] Feb 24 02:46:19.122899 master-0 kubenswrapper[31411]: I0224 02:46:19.122848 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ed6f-account-create-update-kn7d6"] Feb 24 02:46:19.142515 master-0 kubenswrapper[31411]: I0224 02:46:19.142387 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-vptkz"] Feb 24 02:46:19.155000 master-0 kubenswrapper[31411]: I0224 02:46:19.154912 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7051-account-create-update-2j7gx"] Feb 24 02:46:19.167564 master-0 kubenswrapper[31411]: I0224 02:46:19.167455 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-x5qn7"] Feb 24 02:46:19.181933 master-0 kubenswrapper[31411]: I0224 02:46:19.181846 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ed6f-account-create-update-kn7d6"] Feb 24 02:46:19.197124 master-0 kubenswrapper[31411]: I0224 02:46:19.195953 31411 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-db-create-vptkz"] Feb 24 02:46:21.123650 master-0 kubenswrapper[31411]: I0224 02:46:21.123364 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4459f0c8-08d6-4c34-8144-f36dc7408608" path="/var/lib/kubelet/pods/4459f0c8-08d6-4c34-8144-f36dc7408608/volumes" Feb 24 02:46:21.125246 master-0 kubenswrapper[31411]: I0224 02:46:21.125187 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7949ba-1c60-4a1d-872b-cc388aba1adc" path="/var/lib/kubelet/pods/5c7949ba-1c60-4a1d-872b-cc388aba1adc/volumes" Feb 24 02:46:21.127024 master-0 kubenswrapper[31411]: I0224 02:46:21.126555 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac75776-ea65-4171-9504-a54ece21245f" path="/var/lib/kubelet/pods/6ac75776-ea65-4171-9504-a54ece21245f/volumes" Feb 24 02:46:21.128048 master-0 kubenswrapper[31411]: I0224 02:46:21.127979 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bdc9612-8122-43dd-b0bd-3a2044dc4848" path="/var/lib/kubelet/pods/6bdc9612-8122-43dd-b0bd-3a2044dc4848/volumes" Feb 24 02:46:24.103171 master-0 kubenswrapper[31411]: I0224 02:46:24.103021 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4z2pz"] Feb 24 02:46:24.114800 master-0 kubenswrapper[31411]: I0224 02:46:24.114713 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4z2pz"] Feb 24 02:46:25.108833 master-0 kubenswrapper[31411]: I0224 02:46:25.108729 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bdbc28e-3ed7-4454-8421-043a9d2864c8" path="/var/lib/kubelet/pods/0bdbc28e-3ed7-4454-8421-043a9d2864c8/volumes" Feb 24 02:46:26.047721 master-0 kubenswrapper[31411]: I0224 02:46:26.047642 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dnhq7"] Feb 24 02:46:26.062527 master-0 kubenswrapper[31411]: I0224 02:46:26.062421 31411 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-db-sync-dnhq7"] Feb 24 02:46:27.119287 master-0 kubenswrapper[31411]: I0224 02:46:27.119216 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af" path="/var/lib/kubelet/pods/1dbd4d74-8d03-4f2c-95d7-e1b18b7db0af/volumes" Feb 24 02:46:33.065985 master-0 kubenswrapper[31411]: I0224 02:46:33.065877 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-8l585"] Feb 24 02:46:33.083897 master-0 kubenswrapper[31411]: I0224 02:46:33.083765 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-ecce-account-create-update-2pvjj"] Feb 24 02:46:33.114851 master-0 kubenswrapper[31411]: I0224 02:46:33.114751 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-8l585"] Feb 24 02:46:33.114851 master-0 kubenswrapper[31411]: I0224 02:46:33.114805 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-ecce-account-create-update-2pvjj"] Feb 24 02:46:35.110813 master-0 kubenswrapper[31411]: I0224 02:46:35.110710 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2752407f-852c-433f-80e1-0a3d258c7edf" path="/var/lib/kubelet/pods/2752407f-852c-433f-80e1-0a3d258c7edf/volumes" Feb 24 02:46:35.112210 master-0 kubenswrapper[31411]: I0224 02:46:35.111819 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f4631c5-4285-4b86-8afb-2462577e53fc" path="/var/lib/kubelet/pods/9f4631c5-4285-4b86-8afb-2462577e53fc/volumes" Feb 24 02:46:50.011373 master-0 kubenswrapper[31411]: I0224 02:46:50.011143 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wdcb6"] Feb 24 02:46:50.063898 master-0 kubenswrapper[31411]: I0224 02:46:50.063782 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wdcb6"] Feb 24 02:46:50.205017 master-0 kubenswrapper[31411]: I0224 02:46:50.204896 
31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-njvpx"] Feb 24 02:46:50.310016 master-0 kubenswrapper[31411]: I0224 02:46:50.309819 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-njvpx"] Feb 24 02:46:51.114837 master-0 kubenswrapper[31411]: I0224 02:46:51.114749 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6b0cc7-1005-47be-bc50-5b6057e2407d" path="/var/lib/kubelet/pods/4e6b0cc7-1005-47be-bc50-5b6057e2407d/volumes" Feb 24 02:46:51.115980 master-0 kubenswrapper[31411]: I0224 02:46:51.115945 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7edb7291-5510-45f1-810f-de8e6bf08cd0" path="/var/lib/kubelet/pods/7edb7291-5510-45f1-810f-de8e6bf08cd0/volumes" Feb 24 02:46:59.074371 master-0 kubenswrapper[31411]: I0224 02:46:59.073932 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-k6pnr"] Feb 24 02:46:59.086900 master-0 kubenswrapper[31411]: I0224 02:46:59.086816 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-k6pnr"] Feb 24 02:46:59.107607 master-0 kubenswrapper[31411]: I0224 02:46:59.107525 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7c1b26-0a14-4626-b7ed-ec82103e883c" path="/var/lib/kubelet/pods/8d7c1b26-0a14-4626-b7ed-ec82103e883c/volumes" Feb 24 02:47:02.135014 master-0 kubenswrapper[31411]: I0224 02:47:02.134922 31411 scope.go:117] "RemoveContainer" containerID="9e1a11cd30d8e641c8a6d186495650d5b23fc6cf4d80e70d7d5428f1c31d1861" Feb 24 02:47:02.172402 master-0 kubenswrapper[31411]: I0224 02:47:02.172342 31411 scope.go:117] "RemoveContainer" containerID="a396992afc6bd6a8dd79fe828cd981509e7cf8dd75f880184e243de5f99c769a" Feb 24 02:47:02.237756 master-0 kubenswrapper[31411]: I0224 02:47:02.237706 31411 scope.go:117] "RemoveContainer" containerID="c0ed8ed657a2640a5b93cec82ca016e1b8eaeacb56980d510576e81cb6580f18" Feb 24 
02:47:02.313744 master-0 kubenswrapper[31411]: I0224 02:47:02.313685 31411 scope.go:117] "RemoveContainer" containerID="2ed19f5063aed050bc5f2a7be8758140c36c7d5a22c41712656ec008ef2ecb69" Feb 24 02:47:02.394184 master-0 kubenswrapper[31411]: I0224 02:47:02.393902 31411 scope.go:117] "RemoveContainer" containerID="6549e94a090b89b6149dbf8755d38f83b1b705346f90e412b1f91fca52cc279c" Feb 24 02:47:02.466968 master-0 kubenswrapper[31411]: I0224 02:47:02.466896 31411 scope.go:117] "RemoveContainer" containerID="0f8014e374704ae7a775f025c14900e36ea34a3ac04cdda51b844d6643cd40a8" Feb 24 02:47:02.498802 master-0 kubenswrapper[31411]: I0224 02:47:02.498719 31411 scope.go:117] "RemoveContainer" containerID="2d56cef06e1f16125a76fa9bc35f5f7ce7db7f550944fcbb2ee3e2af80254880" Feb 24 02:47:02.545769 master-0 kubenswrapper[31411]: I0224 02:47:02.545712 31411 scope.go:117] "RemoveContainer" containerID="7ff963faa5b47cf603ab6ed354024a4daa3d9b51272e1250252e26307a7cd261" Feb 24 02:47:02.592000 master-0 kubenswrapper[31411]: I0224 02:47:02.591920 31411 scope.go:117] "RemoveContainer" containerID="83cfaa12b349d53aaf1ea3130516915b5cfe8919f9c34403280f1d2f62e20ad8" Feb 24 02:47:02.623416 master-0 kubenswrapper[31411]: I0224 02:47:02.623359 31411 scope.go:117] "RemoveContainer" containerID="6d58f61c32f49c831e3aebfad2f2f326bbd6c8aa7cd7a0c8677d74e92febc7b3" Feb 24 02:47:02.648640 master-0 kubenswrapper[31411]: I0224 02:47:02.648555 31411 scope.go:117] "RemoveContainer" containerID="2338ebf21823338d8822ab61432c537f4a6ea8a833dcb6a268f84a3109e156a9" Feb 24 02:47:02.688396 master-0 kubenswrapper[31411]: I0224 02:47:02.688352 31411 scope.go:117] "RemoveContainer" containerID="836773369c211dfb595440864cdcc202fb44fd758d7990ee88075dba5852c88d" Feb 24 02:47:06.588775 master-0 kubenswrapper[31411]: I0224 02:47:06.588672 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ac23-db-sync-mhchn"] Feb 24 02:47:06.601464 master-0 kubenswrapper[31411]: I0224 02:47:06.601320 31411 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6ac23-db-sync-mhchn"] Feb 24 02:47:07.109125 master-0 kubenswrapper[31411]: I0224 02:47:07.109035 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c356cf44-9774-4260-9463-2960be302f0e" path="/var/lib/kubelet/pods/c356cf44-9774-4260-9463-2960be302f0e/volumes" Feb 24 02:47:14.108433 master-0 kubenswrapper[31411]: I0224 02:47:14.108336 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-jzr8b"] Feb 24 02:47:14.127847 master-0 kubenswrapper[31411]: I0224 02:47:14.127761 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-jzr8b"] Feb 24 02:47:15.124181 master-0 kubenswrapper[31411]: I0224 02:47:15.124084 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3a705bf-9636-4410-a44a-6ff6907d4179" path="/var/lib/kubelet/pods/a3a705bf-9636-4410-a44a-6ff6907d4179/volumes" Feb 24 02:47:20.115090 master-0 kubenswrapper[31411]: I0224 02:47:20.114975 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-8kz9s"] Feb 24 02:47:20.144534 master-0 kubenswrapper[31411]: I0224 02:47:20.144428 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-8kz9s"] Feb 24 02:47:21.122180 master-0 kubenswrapper[31411]: I0224 02:47:21.122084 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="681e6026-865f-4f36-9ca6-5321c0738d18" path="/var/lib/kubelet/pods/681e6026-865f-4f36-9ca6-5321c0738d18/volumes" Feb 24 02:47:21.163647 master-0 kubenswrapper[31411]: I0224 02:47:21.163555 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-2bdc-account-create-update-5cgdd"] Feb 24 02:47:21.257102 master-0 kubenswrapper[31411]: I0224 02:47:21.257012 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-2bdc-account-create-update-5cgdd"] Feb 24 
02:47:23.121914 master-0 kubenswrapper[31411]: I0224 02:47:23.121801 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4cb99b-a2a5-4423-ab60-5d180ea09e93" path="/var/lib/kubelet/pods/df4cb99b-a2a5-4423-ab60-5d180ea09e93/volumes" Feb 24 02:47:52.093609 master-0 kubenswrapper[31411]: I0224 02:47:52.092520 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xrhk2"] Feb 24 02:47:52.115420 master-0 kubenswrapper[31411]: I0224 02:47:52.115342 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7331-account-create-update-4cdxr"] Feb 24 02:47:52.124666 master-0 kubenswrapper[31411]: I0224 02:47:52.124612 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-05a5-account-create-update-bt8vb"] Feb 24 02:47:52.143659 master-0 kubenswrapper[31411]: I0224 02:47:52.143538 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pf58r"] Feb 24 02:47:52.143659 master-0 kubenswrapper[31411]: I0224 02:47:52.143655 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-sync-8hw9n"] Feb 24 02:47:52.163337 master-0 kubenswrapper[31411]: I0224 02:47:52.163271 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-d8zwm"] Feb 24 02:47:52.163442 master-0 kubenswrapper[31411]: I0224 02:47:52.163355 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xrhk2"] Feb 24 02:47:52.174275 master-0 kubenswrapper[31411]: I0224 02:47:52.174211 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d8b9-account-create-update-kq9f4"] Feb 24 02:47:52.177169 master-0 kubenswrapper[31411]: I0224 02:47:52.177129 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7331-account-create-update-4cdxr"] Feb 24 02:47:52.186474 master-0 kubenswrapper[31411]: I0224 02:47:52.186434 31411 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pf58r"] Feb 24 02:47:52.204526 master-0 kubenswrapper[31411]: I0224 02:47:52.204445 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-05a5-account-create-update-bt8vb"] Feb 24 02:47:52.222214 master-0 kubenswrapper[31411]: I0224 02:47:52.222138 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-d8zwm"] Feb 24 02:47:52.222214 master-0 kubenswrapper[31411]: I0224 02:47:52.222200 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-sync-8hw9n"] Feb 24 02:47:52.234257 master-0 kubenswrapper[31411]: I0224 02:47:52.234191 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d8b9-account-create-update-kq9f4"] Feb 24 02:47:53.105309 master-0 kubenswrapper[31411]: I0224 02:47:53.105230 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e4defeb-f6b0-46db-9acc-6df2d2490988" path="/var/lib/kubelet/pods/0e4defeb-f6b0-46db-9acc-6df2d2490988/volumes" Feb 24 02:47:53.106181 master-0 kubenswrapper[31411]: I0224 02:47:53.105887 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264be425-2328-436a-9a2d-0215e640276c" path="/var/lib/kubelet/pods/264be425-2328-436a-9a2d-0215e640276c/volumes" Feb 24 02:47:53.106591 master-0 kubenswrapper[31411]: I0224 02:47:53.106536 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3695612d-87a3-4401-9303-05ae933a9f78" path="/var/lib/kubelet/pods/3695612d-87a3-4401-9303-05ae933a9f78/volumes" Feb 24 02:47:53.107721 master-0 kubenswrapper[31411]: I0224 02:47:53.107685 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80c28d7c-77ba-477e-9b90-8432c2c7b48f" path="/var/lib/kubelet/pods/80c28d7c-77ba-477e-9b90-8432c2c7b48f/volumes" Feb 24 02:47:53.108307 master-0 kubenswrapper[31411]: I0224 02:47:53.108274 31411 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="85b32976-b3fc-498f-b15b-690cee8bfc95" path="/var/lib/kubelet/pods/85b32976-b3fc-498f-b15b-690cee8bfc95/volumes" Feb 24 02:47:53.108883 master-0 kubenswrapper[31411]: I0224 02:47:53.108850 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1ff294-924a-46b5-9107-84238d30135f" path="/var/lib/kubelet/pods/8c1ff294-924a-46b5-9107-84238d30135f/volumes" Feb 24 02:47:53.109464 master-0 kubenswrapper[31411]: I0224 02:47:53.109431 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c676cf-386b-455c-b9f8-b88a3a34a136" path="/var/lib/kubelet/pods/b8c676cf-386b-455c-b9f8-b88a3a34a136/volumes" Feb 24 02:47:59.976303 master-0 kubenswrapper[31411]: I0224 02:47:59.976229 31411 patch_prober.go:28] interesting pod/packageserver-597975fc65-xcl6c container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 02:47:59.977094 master-0 kubenswrapper[31411]: I0224 02:47:59.976335 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-597975fc65-xcl6c" podUID="9cad383a-cb69-41a8-aec8-23ee1c930430" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.51:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 02:48:00.936721 master-0 kubenswrapper[31411]: I0224 02:48:00.936559 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="86b07869-3ccf-46a9-9ca3-9954a1508cff" containerName="galera" probeResult="failure" output="command timed out" Feb 24 02:48:03.605942 master-0 kubenswrapper[31411]: I0224 02:48:03.605877 31411 scope.go:117] "RemoveContainer" 
containerID="eba3c9e9d3744ec13ee7e23484b76faeb29d5d57adc284af42df09984d8ec479" Feb 24 02:48:03.629016 master-0 kubenswrapper[31411]: I0224 02:48:03.628935 31411 scope.go:117] "RemoveContainer" containerID="66967fc67f5d734652bce57f0d6e278d63a9fb0e5f8989303157d0300581964f" Feb 24 02:48:03.688387 master-0 kubenswrapper[31411]: I0224 02:48:03.688311 31411 scope.go:117] "RemoveContainer" containerID="5099be0b63b83ad0af9fae73dcbd9a1cb30dd6e86579c9efb5b56f154efb3c01" Feb 24 02:48:03.741272 master-0 kubenswrapper[31411]: I0224 02:48:03.741236 31411 scope.go:117] "RemoveContainer" containerID="f3b27cff129a8568d482620266fcff99c1cd11397d9f6cef9d597683f1f311f1" Feb 24 02:48:04.861387 master-0 kubenswrapper[31411]: I0224 02:48:04.861136 31411 scope.go:117] "RemoveContainer" containerID="b052e406402d006adcd8d7e77760763fe494cdc2a2e6d160a6e36e94f549d3d4" Feb 24 02:48:04.896076 master-0 kubenswrapper[31411]: I0224 02:48:04.895857 31411 scope.go:117] "RemoveContainer" containerID="df4c2d83721af90c423447bcf1b9b69d8aac7ee0523be5a3ea4ac895cf7ffb24" Feb 24 02:48:05.001704 master-0 kubenswrapper[31411]: I0224 02:48:05.001605 31411 scope.go:117] "RemoveContainer" containerID="47500ce406e24c05c3cb656695bc67b302bb1c3089aed248cbc1372b1c351a3e" Feb 24 02:48:05.052907 master-0 kubenswrapper[31411]: I0224 02:48:05.052863 31411 scope.go:117] "RemoveContainer" containerID="ddb0bf913baafc8ac71e50bd0cf260ae7dc8f2f523de3cba1b5ccee0768e147a" Feb 24 02:48:05.089289 master-0 kubenswrapper[31411]: I0224 02:48:05.089253 31411 scope.go:117] "RemoveContainer" containerID="4e6f99bfdd10b2eb4b70128246adebd3bd4068497ea6351d2322dddc725b1621" Feb 24 02:48:05.155049 master-0 kubenswrapper[31411]: I0224 02:48:05.154809 31411 scope.go:117] "RemoveContainer" containerID="42a465e40ef7b774a830f3590092a0cf93e2e6de5a60c4d097f6652a6877c75a" Feb 24 02:48:05.224266 master-0 kubenswrapper[31411]: I0224 02:48:05.224131 31411 scope.go:117] "RemoveContainer" 
containerID="42fee9d4a5f97dca1ebdfeeb55d78aa1a173c534dbd768ee386a6a27c02352df" Feb 24 02:48:05.256853 master-0 kubenswrapper[31411]: I0224 02:48:05.256007 31411 scope.go:117] "RemoveContainer" containerID="38ac37b6a76645d3cf79e9320b586251da08acfbf09dc87365e36648638ac236" Feb 24 02:48:15.244584 master-0 kubenswrapper[31411]: I0224 02:48:15.244502 31411 trace.go:236] Trace[1191251513]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (24-Feb-2026 02:48:14.082) (total time: 1162ms): Feb 24 02:48:15.244584 master-0 kubenswrapper[31411]: Trace[1191251513]: [1.16234739s] [1.16234739s] END Feb 24 02:48:31.067692 master-0 kubenswrapper[31411]: I0224 02:48:31.064995 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vdhjz"] Feb 24 02:48:31.078646 master-0 kubenswrapper[31411]: I0224 02:48:31.077305 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vdhjz"] Feb 24 02:48:31.106013 master-0 kubenswrapper[31411]: I0224 02:48:31.105934 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9" path="/var/lib/kubelet/pods/d9d4fad3-73a1-40b5-8af2-abb7bfee4bb9/volumes" Feb 24 02:48:54.067469 master-0 kubenswrapper[31411]: I0224 02:48:54.067382 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mz9lj"] Feb 24 02:48:54.080534 master-0 kubenswrapper[31411]: I0224 02:48:54.080435 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mz9lj"] Feb 24 02:48:55.058543 master-0 kubenswrapper[31411]: I0224 02:48:55.057598 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7jt69"] Feb 24 02:48:55.080478 master-0 kubenswrapper[31411]: I0224 02:48:55.080390 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7jt69"] Feb 24 02:48:55.109111 
master-0 kubenswrapper[31411]: I0224 02:48:55.109024 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b93a03-505a-41a6-94b6-9344778d91be" path="/var/lib/kubelet/pods/23b93a03-505a-41a6-94b6-9344778d91be/volumes" Feb 24 02:48:55.110107 master-0 kubenswrapper[31411]: I0224 02:48:55.110067 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d58022-3d1c-4bab-ba2d-ca5a54d511db" path="/var/lib/kubelet/pods/83d58022-3d1c-4bab-ba2d-ca5a54d511db/volumes" Feb 24 02:49:05.635219 master-0 kubenswrapper[31411]: I0224 02:49:05.634978 31411 scope.go:117] "RemoveContainer" containerID="32da3892179cccca9e16d1e01d1b1762e3f078ab921801f1ce77728bb81e8507" Feb 24 02:49:05.725613 master-0 kubenswrapper[31411]: I0224 02:49:05.724453 31411 scope.go:117] "RemoveContainer" containerID="67f54de3f30c2b2affe27ec8d04a452901c3a9e1ea584b7774eac9304f416f7c" Feb 24 02:49:05.789124 master-0 kubenswrapper[31411]: I0224 02:49:05.789055 31411 scope.go:117] "RemoveContainer" containerID="24ed2b6d864e061dc427eae421ffa48e8027e239f97e5f29b3e2b21204ad97ff" Feb 24 02:49:34.241846 master-0 kubenswrapper[31411]: I0224 02:49:34.241774 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-host-discover-stqn6"] Feb 24 02:49:34.256037 master-0 kubenswrapper[31411]: I0224 02:49:34.255952 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-stqn6"] Feb 24 02:49:35.066356 master-0 kubenswrapper[31411]: I0224 02:49:35.066261 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-rxn8v"] Feb 24 02:49:35.078739 master-0 kubenswrapper[31411]: I0224 02:49:35.078658 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-rxn8v"] Feb 24 02:49:35.109345 master-0 kubenswrapper[31411]: I0224 02:49:35.109272 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c49571-1aa8-4a21-9d31-76039b6413d8" 
path="/var/lib/kubelet/pods/f1c49571-1aa8-4a21-9d31-76039b6413d8/volumes" Feb 24 02:49:35.110139 master-0 kubenswrapper[31411]: I0224 02:49:35.110106 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f284bd2b-440f-48e1-b995-9da2d0519a0b" path="/var/lib/kubelet/pods/f284bd2b-440f-48e1-b995-9da2d0519a0b/volumes" Feb 24 02:49:45.492094 master-0 kubenswrapper[31411]: I0224 02:49:45.491963 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" podUID="e2268b3c-ccf6-4309-ab1e-6c083c1f78cf" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.163:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 02:49:45.492094 master-0 kubenswrapper[31411]: I0224 02:49:45.491966 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-q59hq" podUID="e2268b3c-ccf6-4309-ab1e-6c083c1f78cf" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.163:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 02:50:05.933786 master-0 kubenswrapper[31411]: I0224 02:50:05.933021 31411 scope.go:117] "RemoveContainer" containerID="6c1be05d035ee036673fcd1bda38523824882e54a83f1346c5a1559a69b29a6e" Feb 24 02:50:06.002004 master-0 kubenswrapper[31411]: I0224 02:50:06.001917 31411 scope.go:117] "RemoveContainer" containerID="ba129ba62efac6ca8a65fbed4375af6d02144201fef82d26b760768f8a027dfc" Feb 24 02:52:21.184719 master-0 kubenswrapper[31411]: I0224 02:52:21.183394 31411 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" podUID="013fb964-8d21-4b63-9afb-521a7e902920" containerName="webhook-server" probeResult="failure" output="Get \"http://10.128.0.121:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Feb 24 02:52:21.191359 master-0 kubenswrapper[31411]: I0224 02:52:21.188609 31411 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-559d754c8d-8sgn7" podUID="013fb964-8d21-4b63-9afb-521a7e902920" containerName="webhook-server" probeResult="failure" output="Get \"http://10.128.0.121:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 03:00:00.196323 master-0 kubenswrapper[31411]: I0224 03:00:00.196216 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct"] Feb 24 03:00:00.197409 master-0 kubenswrapper[31411]: E0224 03:00:00.197356 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ac8aa9-5aeb-463e-a090-d2c5bb2408af" containerName="collect-profiles" Feb 24 03:00:00.197409 master-0 kubenswrapper[31411]: I0224 03:00:00.197397 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ac8aa9-5aeb-463e-a090-d2c5bb2408af" containerName="collect-profiles" Feb 24 03:00:00.198081 master-0 kubenswrapper[31411]: I0224 03:00:00.198004 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ac8aa9-5aeb-463e-a090-d2c5bb2408af" containerName="collect-profiles" Feb 24 03:00:00.199821 master-0 kubenswrapper[31411]: I0224 03:00:00.199772 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.203965 master-0 kubenswrapper[31411]: I0224 03:00:00.203909 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-dqz7q" Feb 24 03:00:00.204413 master-0 kubenswrapper[31411]: I0224 03:00:00.204371 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 03:00:00.213374 master-0 kubenswrapper[31411]: I0224 03:00:00.213285 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct"] Feb 24 03:00:00.275830 master-0 kubenswrapper[31411]: I0224 03:00:00.275725 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f94c3706-68ae-45da-99d7-95b44ad74777-config-volume\") pod \"collect-profiles-29531700-q4sct\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.275830 master-0 kubenswrapper[31411]: I0224 03:00:00.275809 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnd88\" (UniqueName: \"kubernetes.io/projected/f94c3706-68ae-45da-99d7-95b44ad74777-kube-api-access-dnd88\") pod \"collect-profiles-29531700-q4sct\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.275830 master-0 kubenswrapper[31411]: I0224 03:00:00.275852 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f94c3706-68ae-45da-99d7-95b44ad74777-secret-volume\") pod \"collect-profiles-29531700-q4sct\" (UID: 
\"f94c3706-68ae-45da-99d7-95b44ad74777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.379386 master-0 kubenswrapper[31411]: I0224 03:00:00.379306 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f94c3706-68ae-45da-99d7-95b44ad74777-config-volume\") pod \"collect-profiles-29531700-q4sct\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.379386 master-0 kubenswrapper[31411]: I0224 03:00:00.379378 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnd88\" (UniqueName: \"kubernetes.io/projected/f94c3706-68ae-45da-99d7-95b44ad74777-kube-api-access-dnd88\") pod \"collect-profiles-29531700-q4sct\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.379880 master-0 kubenswrapper[31411]: I0224 03:00:00.379594 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f94c3706-68ae-45da-99d7-95b44ad74777-secret-volume\") pod \"collect-profiles-29531700-q4sct\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.380395 master-0 kubenswrapper[31411]: I0224 03:00:00.380330 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f94c3706-68ae-45da-99d7-95b44ad74777-config-volume\") pod \"collect-profiles-29531700-q4sct\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.385171 master-0 kubenswrapper[31411]: I0224 03:00:00.385120 31411 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f94c3706-68ae-45da-99d7-95b44ad74777-secret-volume\") pod \"collect-profiles-29531700-q4sct\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.402765 master-0 kubenswrapper[31411]: I0224 03:00:00.402485 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnd88\" (UniqueName: \"kubernetes.io/projected/f94c3706-68ae-45da-99d7-95b44ad74777-kube-api-access-dnd88\") pod \"collect-profiles-29531700-q4sct\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:00.538868 master-0 kubenswrapper[31411]: I0224 03:00:00.538774 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:01.121074 master-0 kubenswrapper[31411]: I0224 03:00:01.120984 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct"] Feb 24 03:00:01.123318 master-0 kubenswrapper[31411]: W0224 03:00:01.123251 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf94c3706_68ae_45da_99d7_95b44ad74777.slice/crio-08fa0c8549394a7a8386ded5fcebd162cc7963ef96a2129b43f6db78f017cfc9 WatchSource:0}: Error finding container 08fa0c8549394a7a8386ded5fcebd162cc7963ef96a2129b43f6db78f017cfc9: Status 404 returned error can't find the container with id 08fa0c8549394a7a8386ded5fcebd162cc7963ef96a2129b43f6db78f017cfc9 Feb 24 03:00:01.856158 master-0 kubenswrapper[31411]: I0224 03:00:01.855918 31411 generic.go:334] "Generic (PLEG): container finished" podID="f94c3706-68ae-45da-99d7-95b44ad74777" containerID="6dc4ef4ca348d9577ab245d5a169e66e9aef33b6e588fb3f715c82386dd991f5" exitCode=0 Feb 24 
03:00:01.856158 master-0 kubenswrapper[31411]: I0224 03:00:01.855999 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" event={"ID":"f94c3706-68ae-45da-99d7-95b44ad74777","Type":"ContainerDied","Data":"6dc4ef4ca348d9577ab245d5a169e66e9aef33b6e588fb3f715c82386dd991f5"} Feb 24 03:00:01.856158 master-0 kubenswrapper[31411]: I0224 03:00:01.856084 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" event={"ID":"f94c3706-68ae-45da-99d7-95b44ad74777","Type":"ContainerStarted","Data":"08fa0c8549394a7a8386ded5fcebd162cc7963ef96a2129b43f6db78f017cfc9"} Feb 24 03:00:03.401957 master-0 kubenswrapper[31411]: I0224 03:00:03.401847 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:03.585968 master-0 kubenswrapper[31411]: I0224 03:00:03.585787 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnd88\" (UniqueName: \"kubernetes.io/projected/f94c3706-68ae-45da-99d7-95b44ad74777-kube-api-access-dnd88\") pod \"f94c3706-68ae-45da-99d7-95b44ad74777\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " Feb 24 03:00:03.586198 master-0 kubenswrapper[31411]: I0224 03:00:03.586108 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f94c3706-68ae-45da-99d7-95b44ad74777-config-volume\") pod \"f94c3706-68ae-45da-99d7-95b44ad74777\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " Feb 24 03:00:03.586377 master-0 kubenswrapper[31411]: I0224 03:00:03.586341 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f94c3706-68ae-45da-99d7-95b44ad74777-secret-volume\") pod 
\"f94c3706-68ae-45da-99d7-95b44ad74777\" (UID: \"f94c3706-68ae-45da-99d7-95b44ad74777\") " Feb 24 03:00:03.587136 master-0 kubenswrapper[31411]: I0224 03:00:03.587061 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94c3706-68ae-45da-99d7-95b44ad74777-config-volume" (OuterVolumeSpecName: "config-volume") pod "f94c3706-68ae-45da-99d7-95b44ad74777" (UID: "f94c3706-68ae-45da-99d7-95b44ad74777"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 03:00:03.590725 master-0 kubenswrapper[31411]: I0224 03:00:03.590657 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94c3706-68ae-45da-99d7-95b44ad74777-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f94c3706-68ae-45da-99d7-95b44ad74777" (UID: "f94c3706-68ae-45da-99d7-95b44ad74777"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 03:00:03.597828 master-0 kubenswrapper[31411]: I0224 03:00:03.597759 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94c3706-68ae-45da-99d7-95b44ad74777-kube-api-access-dnd88" (OuterVolumeSpecName: "kube-api-access-dnd88") pod "f94c3706-68ae-45da-99d7-95b44ad74777" (UID: "f94c3706-68ae-45da-99d7-95b44ad74777"). InnerVolumeSpecName "kube-api-access-dnd88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 03:00:03.689993 master-0 kubenswrapper[31411]: I0224 03:00:03.689894 31411 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f94c3706-68ae-45da-99d7-95b44ad74777-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 03:00:03.689993 master-0 kubenswrapper[31411]: I0224 03:00:03.689949 31411 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f94c3706-68ae-45da-99d7-95b44ad74777-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 03:00:03.689993 master-0 kubenswrapper[31411]: I0224 03:00:03.689963 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnd88\" (UniqueName: \"kubernetes.io/projected/f94c3706-68ae-45da-99d7-95b44ad74777-kube-api-access-dnd88\") on node \"master-0\" DevicePath \"\"" Feb 24 03:00:03.892775 master-0 kubenswrapper[31411]: I0224 03:00:03.892592 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" event={"ID":"f94c3706-68ae-45da-99d7-95b44ad74777","Type":"ContainerDied","Data":"08fa0c8549394a7a8386ded5fcebd162cc7963ef96a2129b43f6db78f017cfc9"} Feb 24 03:00:03.892775 master-0 kubenswrapper[31411]: I0224 03:00:03.892656 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08fa0c8549394a7a8386ded5fcebd162cc7963ef96a2129b43f6db78f017cfc9" Feb 24 03:00:03.892775 master-0 kubenswrapper[31411]: I0224 03:00:03.892712 31411 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531700-q4sct" Feb 24 03:00:04.556598 master-0 kubenswrapper[31411]: I0224 03:00:04.555775 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn"] Feb 24 03:00:04.567667 master-0 kubenswrapper[31411]: I0224 03:00:04.566393 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531655-kw6fn"] Feb 24 03:00:05.114124 master-0 kubenswrapper[31411]: I0224 03:00:05.114019 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06fb1d82-f9e9-473b-80c5-767ec3948bd4" path="/var/lib/kubelet/pods/06fb1d82-f9e9-473b-80c5-767ec3948bd4/volumes" Feb 24 03:00:06.998216 master-0 kubenswrapper[31411]: I0224 03:00:06.998164 31411 scope.go:117] "RemoveContainer" containerID="23e5ece2a1174ce846ce41906ef5a0fcc35a5f58a900b96b34aee280e09c4850" Feb 24 03:01:00.526424 master-0 kubenswrapper[31411]: I0224 03:01:00.526281 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29531701-28wv4"] Feb 24 03:01:00.528776 master-0 kubenswrapper[31411]: E0224 03:01:00.528737 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94c3706-68ae-45da-99d7-95b44ad74777" containerName="collect-profiles" Feb 24 03:01:00.528953 master-0 kubenswrapper[31411]: I0224 03:01:00.528929 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94c3706-68ae-45da-99d7-95b44ad74777" containerName="collect-profiles" Feb 24 03:01:00.529639 master-0 kubenswrapper[31411]: I0224 03:01:00.529610 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="f94c3706-68ae-45da-99d7-95b44ad74777" containerName="collect-profiles" Feb 24 03:01:00.531364 master-0 kubenswrapper[31411]: I0224 03:01:00.531328 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.643234 master-0 kubenswrapper[31411]: I0224 03:01:00.643064 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531701-28wv4"] Feb 24 03:01:00.670465 master-0 kubenswrapper[31411]: I0224 03:01:00.670367 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-config-data\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.670949 master-0 kubenswrapper[31411]: I0224 03:01:00.670537 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-fernet-keys\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.670949 master-0 kubenswrapper[31411]: I0224 03:01:00.670622 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwgx8\" (UniqueName: \"kubernetes.io/projected/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-kube-api-access-fwgx8\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.670949 master-0 kubenswrapper[31411]: I0224 03:01:00.670723 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-combined-ca-bundle\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.775024 master-0 kubenswrapper[31411]: I0224 
03:01:00.774916 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-fernet-keys\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.775400 master-0 kubenswrapper[31411]: I0224 03:01:00.775077 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwgx8\" (UniqueName: \"kubernetes.io/projected/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-kube-api-access-fwgx8\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.775400 master-0 kubenswrapper[31411]: I0224 03:01:00.775226 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-combined-ca-bundle\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.775674 master-0 kubenswrapper[31411]: I0224 03:01:00.775524 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-config-data\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.781840 master-0 kubenswrapper[31411]: I0224 03:01:00.781689 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-combined-ca-bundle\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.783043 master-0 
kubenswrapper[31411]: I0224 03:01:00.782983 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-config-data\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.783043 master-0 kubenswrapper[31411]: I0224 03:01:00.783015 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-fernet-keys\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.801383 master-0 kubenswrapper[31411]: I0224 03:01:00.801296 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwgx8\" (UniqueName: \"kubernetes.io/projected/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-kube-api-access-fwgx8\") pod \"keystone-cron-29531701-28wv4\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:00.855907 master-0 kubenswrapper[31411]: I0224 03:01:00.855815 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:01.576186 master-0 kubenswrapper[31411]: I0224 03:01:01.576095 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531701-28wv4"] Feb 24 03:01:01.578550 master-0 kubenswrapper[31411]: W0224 03:01:01.578450 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07e0d56e_d846_405c_9f9f_ba21ade2b8c3.slice/crio-759c55b467496e2fcf86d0e7180feaaf45f6f44773e8cd656ebab42b1f66d33e WatchSource:0}: Error finding container 759c55b467496e2fcf86d0e7180feaaf45f6f44773e8cd656ebab42b1f66d33e: Status 404 returned error can't find the container with id 759c55b467496e2fcf86d0e7180feaaf45f6f44773e8cd656ebab42b1f66d33e Feb 24 03:01:01.862160 master-0 kubenswrapper[31411]: I0224 03:01:01.862060 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531701-28wv4" event={"ID":"07e0d56e-d846-405c-9f9f-ba21ade2b8c3","Type":"ContainerStarted","Data":"759c55b467496e2fcf86d0e7180feaaf45f6f44773e8cd656ebab42b1f66d33e"} Feb 24 03:01:02.894248 master-0 kubenswrapper[31411]: I0224 03:01:02.894133 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531701-28wv4" event={"ID":"07e0d56e-d846-405c-9f9f-ba21ade2b8c3","Type":"ContainerStarted","Data":"e17cf65b178108eabd6fa1ef3daa7ea29aed89da9d53f0074c15e84e4c254d14"} Feb 24 03:01:03.089089 master-0 kubenswrapper[31411]: I0224 03:01:03.088936 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29531701-28wv4" podStartSLOduration=3.088902812 podStartE2EDuration="3.088902812s" podCreationTimestamp="2026-02-24 03:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 03:01:03.084403796 +0000 UTC m=+2406.301601682" watchObservedRunningTime="2026-02-24 
03:01:03.088902812 +0000 UTC m=+2406.306100698" Feb 24 03:01:04.941809 master-0 kubenswrapper[31411]: I0224 03:01:04.941429 31411 generic.go:334] "Generic (PLEG): container finished" podID="07e0d56e-d846-405c-9f9f-ba21ade2b8c3" containerID="e17cf65b178108eabd6fa1ef3daa7ea29aed89da9d53f0074c15e84e4c254d14" exitCode=0 Feb 24 03:01:04.941809 master-0 kubenswrapper[31411]: I0224 03:01:04.941521 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531701-28wv4" event={"ID":"07e0d56e-d846-405c-9f9f-ba21ade2b8c3","Type":"ContainerDied","Data":"e17cf65b178108eabd6fa1ef3daa7ea29aed89da9d53f0074c15e84e4c254d14"} Feb 24 03:01:06.459164 master-0 kubenswrapper[31411]: I0224 03:01:06.459090 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:01:06.590457 master-0 kubenswrapper[31411]: I0224 03:01:06.590272 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-config-data\") pod \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " Feb 24 03:01:06.590457 master-0 kubenswrapper[31411]: I0224 03:01:06.590429 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwgx8\" (UniqueName: \"kubernetes.io/projected/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-kube-api-access-fwgx8\") pod \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " Feb 24 03:01:06.590803 master-0 kubenswrapper[31411]: I0224 03:01:06.590768 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-fernet-keys\") pod \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " Feb 24 03:01:06.590864 master-0 
kubenswrapper[31411]: I0224 03:01:06.590835 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-combined-ca-bundle\") pod \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\" (UID: \"07e0d56e-d846-405c-9f9f-ba21ade2b8c3\") " Feb 24 03:01:06.594273 master-0 kubenswrapper[31411]: I0224 03:01:06.594202 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-kube-api-access-fwgx8" (OuterVolumeSpecName: "kube-api-access-fwgx8") pod "07e0d56e-d846-405c-9f9f-ba21ade2b8c3" (UID: "07e0d56e-d846-405c-9f9f-ba21ade2b8c3"). InnerVolumeSpecName "kube-api-access-fwgx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 03:01:06.601945 master-0 kubenswrapper[31411]: I0224 03:01:06.601880 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "07e0d56e-d846-405c-9f9f-ba21ade2b8c3" (UID: "07e0d56e-d846-405c-9f9f-ba21ade2b8c3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 03:01:06.635863 master-0 kubenswrapper[31411]: I0224 03:01:06.635783 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07e0d56e-d846-405c-9f9f-ba21ade2b8c3" (UID: "07e0d56e-d846-405c-9f9f-ba21ade2b8c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 03:01:06.676198 master-0 kubenswrapper[31411]: I0224 03:01:06.676073 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-config-data" (OuterVolumeSpecName: "config-data") pod "07e0d56e-d846-405c-9f9f-ba21ade2b8c3" (UID: "07e0d56e-d846-405c-9f9f-ba21ade2b8c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 03:01:06.695349 master-0 kubenswrapper[31411]: I0224 03:01:06.695233 31411 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 03:01:06.695349 master-0 kubenswrapper[31411]: I0224 03:01:06.695322 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwgx8\" (UniqueName: \"kubernetes.io/projected/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-kube-api-access-fwgx8\") on node \"master-0\" DevicePath \"\"" Feb 24 03:01:06.695349 master-0 kubenswrapper[31411]: I0224 03:01:06.695346 31411 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 24 03:01:06.695754 master-0 kubenswrapper[31411]: I0224 03:01:06.695366 31411 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0d56e-d846-405c-9f9f-ba21ade2b8c3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 03:01:06.978557 master-0 kubenswrapper[31411]: I0224 03:01:06.978452 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531701-28wv4" event={"ID":"07e0d56e-d846-405c-9f9f-ba21ade2b8c3","Type":"ContainerDied","Data":"759c55b467496e2fcf86d0e7180feaaf45f6f44773e8cd656ebab42b1f66d33e"} Feb 24 03:01:06.978557 
master-0 kubenswrapper[31411]: I0224 03:01:06.978532 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="759c55b467496e2fcf86d0e7180feaaf45f6f44773e8cd656ebab42b1f66d33e" Feb 24 03:01:06.979099 master-0 kubenswrapper[31411]: I0224 03:01:06.978662 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531701-28wv4" Feb 24 03:10:02.597286 master-0 kubenswrapper[31411]: E0224 03:10:02.597055 31411 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:33992->192.168.32.10:44853: write tcp 192.168.32.10:33992->192.168.32.10:44853: write: broken pipe Feb 24 03:15:00.176611 master-0 kubenswrapper[31411]: I0224 03:15:00.176352 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j"] Feb 24 03:15:00.180600 master-0 kubenswrapper[31411]: E0224 03:15:00.177457 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e0d56e-d846-405c-9f9f-ba21ade2b8c3" containerName="keystone-cron" Feb 24 03:15:00.180600 master-0 kubenswrapper[31411]: I0224 03:15:00.177488 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e0d56e-d846-405c-9f9f-ba21ade2b8c3" containerName="keystone-cron" Feb 24 03:15:00.180600 master-0 kubenswrapper[31411]: I0224 03:15:00.177945 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e0d56e-d846-405c-9f9f-ba21ade2b8c3" containerName="keystone-cron" Feb 24 03:15:00.180600 master-0 kubenswrapper[31411]: I0224 03:15:00.179194 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.188123 master-0 kubenswrapper[31411]: I0224 03:15:00.182109 31411 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-dqz7q" Feb 24 03:15:00.188123 master-0 kubenswrapper[31411]: I0224 03:15:00.183427 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 03:15:00.192365 master-0 kubenswrapper[31411]: I0224 03:15:00.191530 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j"] Feb 24 03:15:00.278965 master-0 kubenswrapper[31411]: I0224 03:15:00.278872 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chfp5\" (UniqueName: \"kubernetes.io/projected/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-kube-api-access-chfp5\") pod \"collect-profiles-29531715-pdf5j\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.280413 master-0 kubenswrapper[31411]: I0224 03:15:00.280323 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-secret-volume\") pod \"collect-profiles-29531715-pdf5j\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.280524 master-0 kubenswrapper[31411]: I0224 03:15:00.280449 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-config-volume\") pod \"collect-profiles-29531715-pdf5j\" (UID: 
\"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.384640 master-0 kubenswrapper[31411]: I0224 03:15:00.383232 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chfp5\" (UniqueName: \"kubernetes.io/projected/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-kube-api-access-chfp5\") pod \"collect-profiles-29531715-pdf5j\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.384640 master-0 kubenswrapper[31411]: I0224 03:15:00.383464 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-secret-volume\") pod \"collect-profiles-29531715-pdf5j\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.384640 master-0 kubenswrapper[31411]: I0224 03:15:00.383490 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-config-volume\") pod \"collect-profiles-29531715-pdf5j\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.384640 master-0 kubenswrapper[31411]: I0224 03:15:00.384390 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-config-volume\") pod \"collect-profiles-29531715-pdf5j\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.389988 master-0 kubenswrapper[31411]: I0224 03:15:00.389923 31411 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-secret-volume\") pod \"collect-profiles-29531715-pdf5j\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.411596 master-0 kubenswrapper[31411]: I0224 03:15:00.411475 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chfp5\" (UniqueName: \"kubernetes.io/projected/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-kube-api-access-chfp5\") pod \"collect-profiles-29531715-pdf5j\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:00.530803 master-0 kubenswrapper[31411]: I0224 03:15:00.530704 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:01.129071 master-0 kubenswrapper[31411]: I0224 03:15:01.128909 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j"] Feb 24 03:15:01.191155 master-0 kubenswrapper[31411]: I0224 03:15:01.191049 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" event={"ID":"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1","Type":"ContainerStarted","Data":"9a1d8d0341626bf0625b14edd5b503451d9bd3a49dfbcac96353932359f9a022"} Feb 24 03:15:02.210047 master-0 kubenswrapper[31411]: I0224 03:15:02.209954 31411 generic.go:334] "Generic (PLEG): container finished" podID="8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1" containerID="ad8083b3283833520ab0cac0dc457d50dba777d0a12bb5196c539438c9a8168d" exitCode=0 Feb 24 03:15:02.210047 master-0 kubenswrapper[31411]: I0224 03:15:02.210043 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" 
event={"ID":"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1","Type":"ContainerDied","Data":"ad8083b3283833520ab0cac0dc457d50dba777d0a12bb5196c539438c9a8168d"} Feb 24 03:15:03.800993 master-0 kubenswrapper[31411]: I0224 03:15:03.800922 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:03.864174 master-0 kubenswrapper[31411]: I0224 03:15:03.864111 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-secret-volume\") pod \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " Feb 24 03:15:03.864484 master-0 kubenswrapper[31411]: I0224 03:15:03.864238 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-config-volume\") pod \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " Feb 24 03:15:03.864721 master-0 kubenswrapper[31411]: I0224 03:15:03.864687 31411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chfp5\" (UniqueName: \"kubernetes.io/projected/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-kube-api-access-chfp5\") pod \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\" (UID: \"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1\") " Feb 24 03:15:03.866529 master-0 kubenswrapper[31411]: I0224 03:15:03.866464 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-config-volume" (OuterVolumeSpecName: "config-volume") pod "8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1" (UID: "8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 03:15:03.869412 master-0 kubenswrapper[31411]: I0224 03:15:03.869357 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-kube-api-access-chfp5" (OuterVolumeSpecName: "kube-api-access-chfp5") pod "8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1" (UID: "8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1"). InnerVolumeSpecName "kube-api-access-chfp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 03:15:03.873280 master-0 kubenswrapper[31411]: I0224 03:15:03.873193 31411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1" (UID: "8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 03:15:03.968724 master-0 kubenswrapper[31411]: I0224 03:15:03.968660 31411 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 03:15:03.968724 master-0 kubenswrapper[31411]: I0224 03:15:03.968713 31411 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 03:15:03.968724 master-0 kubenswrapper[31411]: I0224 03:15:03.968730 31411 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chfp5\" (UniqueName: \"kubernetes.io/projected/8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1-kube-api-access-chfp5\") on node \"master-0\" DevicePath \"\"" Feb 24 03:15:04.237039 master-0 kubenswrapper[31411]: I0224 03:15:04.236959 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" event={"ID":"8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1","Type":"ContainerDied","Data":"9a1d8d0341626bf0625b14edd5b503451d9bd3a49dfbcac96353932359f9a022"} Feb 24 03:15:04.237039 master-0 kubenswrapper[31411]: I0224 03:15:04.237032 31411 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a1d8d0341626bf0625b14edd5b503451d9bd3a49dfbcac96353932359f9a022" Feb 24 03:15:04.237438 master-0 kubenswrapper[31411]: I0224 03:15:04.237077 31411 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531715-pdf5j" Feb 24 03:15:04.955390 master-0 kubenswrapper[31411]: I0224 03:15:04.955295 31411 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"] Feb 24 03:15:04.977810 master-0 kubenswrapper[31411]: I0224 03:15:04.977687 31411 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531670-t652n"] Feb 24 03:15:05.114143 master-0 kubenswrapper[31411]: I0224 03:15:05.114057 31411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8325a5f-7007-43fe-a995-11f8205c19b2" path="/var/lib/kubelet/pods/c8325a5f-7007-43fe-a995-11f8205c19b2/volumes" Feb 24 03:15:07.615711 master-0 kubenswrapper[31411]: I0224 03:15:07.614474 31411 scope.go:117] "RemoveContainer" containerID="17295c9c53675112adc7bb9bc9c1af22b6775319fc779fe0c3460fa72ae66c4e" Feb 24 03:15:22.908302 master-0 kubenswrapper[31411]: I0224 03:15:22.907045 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k6m47/must-gather-mr5fj"] Feb 24 03:15:22.908302 master-0 kubenswrapper[31411]: E0224 03:15:22.907727 31411 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1" containerName="collect-profiles" Feb 24 03:15:22.908302 master-0 
kubenswrapper[31411]: I0224 03:15:22.907741 31411 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1" containerName="collect-profiles" Feb 24 03:15:22.909186 master-0 kubenswrapper[31411]: I0224 03:15:22.908515 31411 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6ce87a-7bc0-42f0-82e3-7abeeeb20af1" containerName="collect-profiles" Feb 24 03:15:22.911805 master-0 kubenswrapper[31411]: I0224 03:15:22.909993 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k6m47/must-gather-mr5fj" Feb 24 03:15:22.916126 master-0 kubenswrapper[31411]: I0224 03:15:22.916060 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k6m47"/"openshift-service-ca.crt" Feb 24 03:15:22.916381 master-0 kubenswrapper[31411]: I0224 03:15:22.916091 31411 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-k6m47"/"kube-root-ca.crt" Feb 24 03:15:22.925205 master-0 kubenswrapper[31411]: I0224 03:15:22.925145 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k6m47/must-gather-9wb6p"] Feb 24 03:15:22.929742 master-0 kubenswrapper[31411]: I0224 03:15:22.929670 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k6m47/must-gather-9wb6p" Feb 24 03:15:22.960406 master-0 kubenswrapper[31411]: I0224 03:15:22.959644 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6m47/must-gather-mr5fj"] Feb 24 03:15:22.978100 master-0 kubenswrapper[31411]: I0224 03:15:22.976872 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6m47/must-gather-9wb6p"] Feb 24 03:15:23.057418 master-0 kubenswrapper[31411]: I0224 03:15:23.057310 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572wg\" (UniqueName: \"kubernetes.io/projected/6b2c82c5-f68d-4656-a9ab-2256dadb2cd7-kube-api-access-572wg\") pod \"must-gather-mr5fj\" (UID: \"6b2c82c5-f68d-4656-a9ab-2256dadb2cd7\") " pod="openshift-must-gather-k6m47/must-gather-mr5fj" Feb 24 03:15:23.057869 master-0 kubenswrapper[31411]: I0224 03:15:23.057832 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6b2c82c5-f68d-4656-a9ab-2256dadb2cd7-must-gather-output\") pod \"must-gather-mr5fj\" (UID: \"6b2c82c5-f68d-4656-a9ab-2256dadb2cd7\") " pod="openshift-must-gather-k6m47/must-gather-mr5fj" Feb 24 03:15:23.058362 master-0 kubenswrapper[31411]: I0224 03:15:23.058325 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qllvx\" (UniqueName: \"kubernetes.io/projected/451f119a-f6b6-49b6-9b53-8a322c7c26cc-kube-api-access-qllvx\") pod \"must-gather-9wb6p\" (UID: \"451f119a-f6b6-49b6-9b53-8a322c7c26cc\") " pod="openshift-must-gather-k6m47/must-gather-9wb6p" Feb 24 03:15:23.058459 master-0 kubenswrapper[31411]: I0224 03:15:23.058428 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/451f119a-f6b6-49b6-9b53-8a322c7c26cc-must-gather-output\") pod \"must-gather-9wb6p\" (UID: \"451f119a-f6b6-49b6-9b53-8a322c7c26cc\") " pod="openshift-must-gather-k6m47/must-gather-9wb6p" Feb 24 03:15:23.161199 master-0 kubenswrapper[31411]: I0224 03:15:23.161045 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qllvx\" (UniqueName: \"kubernetes.io/projected/451f119a-f6b6-49b6-9b53-8a322c7c26cc-kube-api-access-qllvx\") pod \"must-gather-9wb6p\" (UID: \"451f119a-f6b6-49b6-9b53-8a322c7c26cc\") " pod="openshift-must-gather-k6m47/must-gather-9wb6p" Feb 24 03:15:23.161199 master-0 kubenswrapper[31411]: I0224 03:15:23.161119 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/451f119a-f6b6-49b6-9b53-8a322c7c26cc-must-gather-output\") pod \"must-gather-9wb6p\" (UID: \"451f119a-f6b6-49b6-9b53-8a322c7c26cc\") " pod="openshift-must-gather-k6m47/must-gather-9wb6p" Feb 24 03:15:23.161512 master-0 kubenswrapper[31411]: I0224 03:15:23.161227 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572wg\" (UniqueName: \"kubernetes.io/projected/6b2c82c5-f68d-4656-a9ab-2256dadb2cd7-kube-api-access-572wg\") pod \"must-gather-mr5fj\" (UID: \"6b2c82c5-f68d-4656-a9ab-2256dadb2cd7\") " pod="openshift-must-gather-k6m47/must-gather-mr5fj" Feb 24 03:15:23.161512 master-0 kubenswrapper[31411]: I0224 03:15:23.161305 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6b2c82c5-f68d-4656-a9ab-2256dadb2cd7-must-gather-output\") pod \"must-gather-mr5fj\" (UID: \"6b2c82c5-f68d-4656-a9ab-2256dadb2cd7\") " pod="openshift-must-gather-k6m47/must-gather-mr5fj" Feb 24 03:15:23.161914 master-0 kubenswrapper[31411]: I0224 03:15:23.161879 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6b2c82c5-f68d-4656-a9ab-2256dadb2cd7-must-gather-output\") pod \"must-gather-mr5fj\" (UID: \"6b2c82c5-f68d-4656-a9ab-2256dadb2cd7\") " pod="openshift-must-gather-k6m47/must-gather-mr5fj" Feb 24 03:15:23.162540 master-0 kubenswrapper[31411]: I0224 03:15:23.162510 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/451f119a-f6b6-49b6-9b53-8a322c7c26cc-must-gather-output\") pod \"must-gather-9wb6p\" (UID: \"451f119a-f6b6-49b6-9b53-8a322c7c26cc\") " pod="openshift-must-gather-k6m47/must-gather-9wb6p" Feb 24 03:15:23.178567 master-0 kubenswrapper[31411]: I0224 03:15:23.178522 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qllvx\" (UniqueName: \"kubernetes.io/projected/451f119a-f6b6-49b6-9b53-8a322c7c26cc-kube-api-access-qllvx\") pod \"must-gather-9wb6p\" (UID: \"451f119a-f6b6-49b6-9b53-8a322c7c26cc\") " pod="openshift-must-gather-k6m47/must-gather-9wb6p" Feb 24 03:15:23.180922 master-0 kubenswrapper[31411]: I0224 03:15:23.180863 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572wg\" (UniqueName: \"kubernetes.io/projected/6b2c82c5-f68d-4656-a9ab-2256dadb2cd7-kube-api-access-572wg\") pod \"must-gather-mr5fj\" (UID: \"6b2c82c5-f68d-4656-a9ab-2256dadb2cd7\") " pod="openshift-must-gather-k6m47/must-gather-mr5fj" Feb 24 03:15:23.260639 master-0 kubenswrapper[31411]: I0224 03:15:23.260565 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k6m47/must-gather-mr5fj" Feb 24 03:15:23.281597 master-0 kubenswrapper[31411]: I0224 03:15:23.279951 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k6m47/must-gather-9wb6p" Feb 24 03:15:23.842152 master-0 kubenswrapper[31411]: W0224 03:15:23.842081 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b2c82c5_f68d_4656_a9ab_2256dadb2cd7.slice/crio-38f5a709eb262a760f38499208839284f3904e81a71157649e194a926ea58714 WatchSource:0}: Error finding container 38f5a709eb262a760f38499208839284f3904e81a71157649e194a926ea58714: Status 404 returned error can't find the container with id 38f5a709eb262a760f38499208839284f3904e81a71157649e194a926ea58714 Feb 24 03:15:23.846862 master-0 kubenswrapper[31411]: I0224 03:15:23.846818 31411 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 03:15:23.850327 master-0 kubenswrapper[31411]: I0224 03:15:23.850228 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6m47/must-gather-mr5fj"] Feb 24 03:15:23.955103 master-0 kubenswrapper[31411]: I0224 03:15:23.954991 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6m47/must-gather-9wb6p"] Feb 24 03:15:23.957647 master-0 kubenswrapper[31411]: W0224 03:15:23.957559 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod451f119a_f6b6_49b6_9b53_8a322c7c26cc.slice/crio-a07c237ffb63ed8d01074acf3b12953cdfa9660d098b8a748830c98ebe58ce38 WatchSource:0}: Error finding container a07c237ffb63ed8d01074acf3b12953cdfa9660d098b8a748830c98ebe58ce38: Status 404 returned error can't find the container with id a07c237ffb63ed8d01074acf3b12953cdfa9660d098b8a748830c98ebe58ce38 Feb 24 03:15:24.628682 master-0 kubenswrapper[31411]: I0224 03:15:24.628552 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/must-gather-mr5fj" 
event={"ID":"6b2c82c5-f68d-4656-a9ab-2256dadb2cd7","Type":"ContainerStarted","Data":"38f5a709eb262a760f38499208839284f3904e81a71157649e194a926ea58714"} Feb 24 03:15:24.631799 master-0 kubenswrapper[31411]: I0224 03:15:24.631758 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/must-gather-9wb6p" event={"ID":"451f119a-f6b6-49b6-9b53-8a322c7c26cc","Type":"ContainerStarted","Data":"a07c237ffb63ed8d01074acf3b12953cdfa9660d098b8a748830c98ebe58ce38"} Feb 24 03:15:25.667391 master-0 kubenswrapper[31411]: I0224 03:15:25.667315 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/must-gather-mr5fj" event={"ID":"6b2c82c5-f68d-4656-a9ab-2256dadb2cd7","Type":"ContainerStarted","Data":"dc871f04226fb357a06ee15fb1dc519e0892b2e9e9cc6ff030626c47237bfe18"} Feb 24 03:15:26.687704 master-0 kubenswrapper[31411]: I0224 03:15:26.687624 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/must-gather-mr5fj" event={"ID":"6b2c82c5-f68d-4656-a9ab-2256dadb2cd7","Type":"ContainerStarted","Data":"185a012ae08f069aaa092aeac3a9bb8de4d6aa392f3d57212c41f7f9d6383e47"} Feb 24 03:15:26.716491 master-0 kubenswrapper[31411]: I0224 03:15:26.716362 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k6m47/must-gather-mr5fj" podStartSLOduration=3.396750872 podStartE2EDuration="4.716342575s" podCreationTimestamp="2026-02-24 03:15:22 +0000 UTC" firstStartedPulling="2026-02-24 03:15:23.846318128 +0000 UTC m=+3267.063515974" lastFinishedPulling="2026-02-24 03:15:25.165909831 +0000 UTC m=+3268.383107677" observedRunningTime="2026-02-24 03:15:26.712776285 +0000 UTC m=+3269.929974131" watchObservedRunningTime="2026-02-24 03:15:26.716342575 +0000 UTC m=+3269.933540421" Feb 24 03:15:29.515264 master-0 kubenswrapper[31411]: I0224 03:15:29.515199 31411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-9cjj5_732a3831-20e0-47dc-a29a-8bb4659541b7/cluster-version-operator/0.log" Feb 24 03:15:32.641876 master-0 kubenswrapper[31411]: I0224 03:15:32.641722 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-nsdtc_67ece790-a3fa-4b4f-8e72-a7079e197d01/nmstate-console-plugin/0.log" Feb 24 03:15:32.672100 master-0 kubenswrapper[31411]: I0224 03:15:32.671643 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bpzvz_ce0aa18f-9e7b-4c9e-965d-55b21a7e14d5/nmstate-handler/0.log" Feb 24 03:15:32.690633 master-0 kubenswrapper[31411]: I0224 03:15:32.690562 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-zx9wt_005ab2f6-2bb8-45d7-9846-0149a6aa9742/nmstate-metrics/0.log" Feb 24 03:15:32.705588 master-0 kubenswrapper[31411]: I0224 03:15:32.705537 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-zx9wt_005ab2f6-2bb8-45d7-9846-0149a6aa9742/kube-rbac-proxy/0.log" Feb 24 03:15:32.719837 master-0 kubenswrapper[31411]: I0224 03:15:32.719753 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-s2t6d_5dd28fe3-673b-4b02-8fab-ab06e03d54e4/controller/0.log" Feb 24 03:15:32.737504 master-0 kubenswrapper[31411]: I0224 03:15:32.737460 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-xp57m_9550d924-11ab-4206-acdb-02ed4342db27/nmstate-operator/0.log" Feb 24 03:15:32.738417 master-0 kubenswrapper[31411]: I0224 03:15:32.738399 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-s2t6d_5dd28fe3-673b-4b02-8fab-ab06e03d54e4/kube-rbac-proxy/0.log" Feb 24 03:15:32.761531 master-0 kubenswrapper[31411]: I0224 03:15:32.761489 31411 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-rft7d_92671039-2aaf-47d0-bfe1-7395d0d41e17/nmstate-webhook/0.log" Feb 24 03:15:32.778112 master-0 kubenswrapper[31411]: I0224 03:15:32.777969 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/controller/0.log" Feb 24 03:15:34.238295 master-0 kubenswrapper[31411]: I0224 03:15:34.238224 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/frr/0.log" Feb 24 03:15:34.253721 master-0 kubenswrapper[31411]: I0224 03:15:34.253647 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/reloader/0.log" Feb 24 03:15:34.268557 master-0 kubenswrapper[31411]: I0224 03:15:34.268506 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/frr-metrics/0.log" Feb 24 03:15:34.278607 master-0 kubenswrapper[31411]: I0224 03:15:34.276112 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/kube-rbac-proxy/0.log" Feb 24 03:15:34.284048 master-0 kubenswrapper[31411]: I0224 03:15:34.282948 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/kube-rbac-proxy-frr/0.log" Feb 24 03:15:34.291346 master-0 kubenswrapper[31411]: I0224 03:15:34.291298 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/cp-frr-files/0.log" Feb 24 03:15:34.302342 master-0 kubenswrapper[31411]: I0224 03:15:34.301763 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/cp-reloader/0.log" Feb 24 03:15:34.314396 master-0 
kubenswrapper[31411]: I0224 03:15:34.314334 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/cp-metrics/0.log" Feb 24 03:15:34.327396 master-0 kubenswrapper[31411]: I0224 03:15:34.327343 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-lthbs_83bea055-a58c-42dd-8ae4-755f7f2944c0/frr-k8s-webhook-server/0.log" Feb 24 03:15:34.360346 master-0 kubenswrapper[31411]: I0224 03:15:34.360147 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7577845998-zvq74_4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7/manager/0.log" Feb 24 03:15:34.387764 master-0 kubenswrapper[31411]: I0224 03:15:34.387683 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-559d754c8d-8sgn7_013fb964-8d21-4b63-9afb-521a7e902920/webhook-server/0.log" Feb 24 03:15:34.833782 master-0 kubenswrapper[31411]: I0224 03:15:34.833386 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbfkl_2376dbda-b2e8-45e5-af4c-7382f0994ae3/speaker/0.log" Feb 24 03:15:34.840291 master-0 kubenswrapper[31411]: I0224 03:15:34.840230 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbfkl_2376dbda-b2e8-45e5-af4c-7382f0994ae3/kube-rbac-proxy/0.log" Feb 24 03:15:35.872318 master-0 kubenswrapper[31411]: I0224 03:15:35.872217 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7f7cbb95f8-pfw2n_3a070e67-63c1-4d58-8c68-1a2aa5a1702a/oauth-openshift/0.log" Feb 24 03:15:37.896651 master-0 kubenswrapper[31411]: I0224 03:15:37.896536 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/must-gather-9wb6p" 
event={"ID":"451f119a-f6b6-49b6-9b53-8a322c7c26cc","Type":"ContainerStarted","Data":"c39e80e2258c60fa0a98a49eaa3bcff1734dafaf70de808bc4532a8ba61dd7dd"} Feb 24 03:15:38.064933 master-0 kubenswrapper[31411]: I0224 03:15:38.064870 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcdctl/0.log" Feb 24 03:15:38.229258 master-0 kubenswrapper[31411]: I0224 03:15:38.229148 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/2.log" Feb 24 03:15:38.342320 master-0 kubenswrapper[31411]: I0224 03:15:38.342260 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-46vmq_cabdddba-5507-4e47-98ef-a00c6d0f305d/authentication-operator/3.log" Feb 24 03:15:38.402362 master-0 kubenswrapper[31411]: I0224 03:15:38.402310 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd/0.log" Feb 24 03:15:38.429419 master-0 kubenswrapper[31411]: I0224 03:15:38.429252 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-metrics/0.log" Feb 24 03:15:38.445293 master-0 kubenswrapper[31411]: I0224 03:15:38.445238 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-readyz/0.log" Feb 24 03:15:38.467379 master-0 kubenswrapper[31411]: I0224 03:15:38.466932 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-rev/0.log" Feb 24 03:15:38.493533 master-0 kubenswrapper[31411]: I0224 03:15:38.493404 31411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/setup/0.log" Feb 24 03:15:38.508031 master-0 kubenswrapper[31411]: I0224 03:15:38.506618 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-ensure-env-vars/0.log" Feb 24 03:15:38.528923 master-0 kubenswrapper[31411]: I0224 03:15:38.527946 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-resources-copy/0.log" Feb 24 03:15:38.590930 master-0 kubenswrapper[31411]: I0224 03:15:38.588947 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_64b7ea36-8849-4955-80b5-c7e7c12fcc29/installer/0.log" Feb 24 03:15:38.630864 master-0 kubenswrapper[31411]: I0224 03:15:38.630796 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_50c78047-1c4d-4535-ba2c-31f080d6a57d/installer/0.log" Feb 24 03:15:38.925446 master-0 kubenswrapper[31411]: I0224 03:15:38.925251 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/must-gather-9wb6p" event={"ID":"451f119a-f6b6-49b6-9b53-8a322c7c26cc","Type":"ContainerStarted","Data":"a4ba56b21c79c12ba512883f6076ce7d3fc14bfc5d825d49f6091e26bfd6783b"} Feb 24 03:15:38.949284 master-0 kubenswrapper[31411]: I0224 03:15:38.948669 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k6m47/must-gather-9wb6p" podStartSLOduration=4.015551682 podStartE2EDuration="16.9486409s" podCreationTimestamp="2026-02-24 03:15:22 +0000 UTC" firstStartedPulling="2026-02-24 03:15:23.962725406 +0000 UTC m=+3267.179923252" lastFinishedPulling="2026-02-24 03:15:36.895814614 +0000 UTC m=+3280.113012470" observedRunningTime="2026-02-24 03:15:38.947121548 +0000 UTC m=+3282.164319414" watchObservedRunningTime="2026-02-24 03:15:38.9486409 +0000 UTC m=+3282.165838746" 
Feb 24 03:15:39.633756 master-0 kubenswrapper[31411]: I0224 03:15:39.633672 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc"] Feb 24 03:15:39.635778 master-0 kubenswrapper[31411]: I0224 03:15:39.635735 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.655515 master-0 kubenswrapper[31411]: I0224 03:15:39.655360 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc"] Feb 24 03:15:39.682703 master-0 kubenswrapper[31411]: I0224 03:15:39.672249 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-22sgl_6a08a1e4-cf92-4733-a8af-c7ac5b21e925/router/2.log" Feb 24 03:15:39.682703 master-0 kubenswrapper[31411]: I0224 03:15:39.673128 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7b65dc9fcb-22sgl_6a08a1e4-cf92-4733-a8af-c7ac5b21e925/router/3.log" Feb 24 03:15:39.733158 master-0 kubenswrapper[31411]: I0224 03:15:39.732682 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvf2s\" (UniqueName: \"kubernetes.io/projected/b0859b83-cc93-49d6-9745-c7bac9cda031-kube-api-access-lvf2s\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.733158 master-0 kubenswrapper[31411]: I0224 03:15:39.732784 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-lib-modules\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " 
pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.733158 master-0 kubenswrapper[31411]: I0224 03:15:39.732873 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-podres\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.733158 master-0 kubenswrapper[31411]: I0224 03:15:39.732931 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-proc\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.733158 master-0 kubenswrapper[31411]: I0224 03:15:39.732950 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-sys\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.747800 master-0 kubenswrapper[31411]: I0224 03:15:39.747559 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-f2lj9_7fa1462b-8f1c-4a77-9c1c-f0f79910737f/assisted-installer-controller/0.log" Feb 24 03:15:39.836678 master-0 kubenswrapper[31411]: I0224 03:15:39.835978 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-lib-modules\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " 
pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.836678 master-0 kubenswrapper[31411]: I0224 03:15:39.836151 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-podres\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.836678 master-0 kubenswrapper[31411]: I0224 03:15:39.836232 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-proc\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.836678 master-0 kubenswrapper[31411]: I0224 03:15:39.836265 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-sys\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.836678 master-0 kubenswrapper[31411]: I0224 03:15:39.836395 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvf2s\" (UniqueName: \"kubernetes.io/projected/b0859b83-cc93-49d6-9745-c7bac9cda031-kube-api-access-lvf2s\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.836678 master-0 kubenswrapper[31411]: I0224 03:15:39.836617 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-sys\") pod 
\"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.836678 master-0 kubenswrapper[31411]: I0224 03:15:39.836695 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-podres\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.837291 master-0 kubenswrapper[31411]: I0224 03:15:39.837187 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-lib-modules\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.837353 master-0 kubenswrapper[31411]: I0224 03:15:39.837318 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/b0859b83-cc93-49d6-9745-c7bac9cda031-proc\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.869624 master-0 kubenswrapper[31411]: I0224 03:15:39.869371 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvf2s\" (UniqueName: \"kubernetes.io/projected/b0859b83-cc93-49d6-9745-c7bac9cda031-kube-api-access-lvf2s\") pod \"perf-node-gather-daemonset-rcfhc\" (UID: \"b0859b83-cc93-49d6-9745-c7bac9cda031\") " pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:39.955625 master-0 kubenswrapper[31411]: I0224 03:15:39.955547 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:40.458875 master-0 kubenswrapper[31411]: I0224 03:15:40.458373 31411 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc"] Feb 24 03:15:40.475887 master-0 kubenswrapper[31411]: W0224 03:15:40.475784 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb0859b83_cc93_49d6_9745_c7bac9cda031.slice/crio-835a59678b10db89f8ffcdba6550b06680050429e4e8ffcc149a104a47b40d1f WatchSource:0}: Error finding container 835a59678b10db89f8ffcdba6550b06680050429e4e8ffcc149a104a47b40d1f: Status 404 returned error can't find the container with id 835a59678b10db89f8ffcdba6550b06680050429e4e8ffcc149a104a47b40d1f Feb 24 03:15:40.858773 master-0 kubenswrapper[31411]: I0224 03:15:40.858720 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-77597cc7cf-8j2k2_b176946a-c056-441c-9145-b88ca4d75758/oauth-apiserver/0.log" Feb 24 03:15:40.872469 master-0 kubenswrapper[31411]: I0224 03:15:40.872410 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-77597cc7cf-8j2k2_b176946a-c056-441c-9145-b88ca4d75758/fix-audit-permissions/0.log" Feb 24 03:15:40.956730 master-0 kubenswrapper[31411]: I0224 03:15:40.956645 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" event={"ID":"b0859b83-cc93-49d6-9745-c7bac9cda031","Type":"ContainerStarted","Data":"835a59678b10db89f8ffcdba6550b06680050429e4e8ffcc149a104a47b40d1f"} Feb 24 03:15:41.802601 master-0 kubenswrapper[31411]: I0224 03:15:41.802454 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-mtrdk_91168f3d-70eb-4351-bb83-5411a96ad29d/kube-rbac-proxy/0.log" Feb 24 03:15:41.883972 master-0 kubenswrapper[31411]: 
I0224 03:15:41.883835 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-mtrdk_91168f3d-70eb-4351-bb83-5411a96ad29d/cluster-autoscaler-operator/0.log" Feb 24 03:15:41.906781 master-0 kubenswrapper[31411]: I0224 03:15:41.906728 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/2.log" Feb 24 03:15:41.907807 master-0 kubenswrapper[31411]: I0224 03:15:41.907749 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/cluster-baremetal-operator/3.log" Feb 24 03:15:41.924927 master-0 kubenswrapper[31411]: I0224 03:15:41.924882 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k98fq_7b4e3ba0-5194-4e20-8f12-dea4b67504fe/baremetal-kube-rbac-proxy/0.log" Feb 24 03:15:41.950521 master-0 kubenswrapper[31411]: I0224 03:15:41.950316 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-ckntz_a4cea44a-1c6e-465f-97df-2c951056cb85/control-plane-machine-set-operator/1.log" Feb 24 03:15:41.951632 master-0 kubenswrapper[31411]: I0224 03:15:41.951465 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-ckntz_a4cea44a-1c6e-465f-97df-2c951056cb85/control-plane-machine-set-operator/0.log" Feb 24 03:15:41.971619 master-0 kubenswrapper[31411]: I0224 03:15:41.970744 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" event={"ID":"b0859b83-cc93-49d6-9745-c7bac9cda031","Type":"ContainerStarted","Data":"f97d9c35ac094071b7c43742eb29c28264e1b2d296bccef4b4004c4963226ef1"} Feb 
24 03:15:41.971619 master-0 kubenswrapper[31411]: I0224 03:15:41.971377 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:41.975942 master-0 kubenswrapper[31411]: I0224 03:15:41.975892 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-dsjgm_0ce6dd93-084c-4e15-8b7c-e0829a6df14e/kube-rbac-proxy/0.log" Feb 24 03:15:41.994887 master-0 kubenswrapper[31411]: I0224 03:15:41.994845 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-dsjgm_0ce6dd93-084c-4e15-8b7c-e0829a6df14e/machine-api-operator/0.log" Feb 24 03:15:42.059191 master-0 kubenswrapper[31411]: I0224 03:15:42.059043 31411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" podStartSLOduration=3.059007755 podStartE2EDuration="3.059007755s" podCreationTimestamp="2026-02-24 03:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 03:15:41.991048623 +0000 UTC m=+3285.208246469" watchObservedRunningTime="2026-02-24 03:15:42.059007755 +0000 UTC m=+3285.276205601" Feb 24 03:15:42.292656 master-0 kubenswrapper[31411]: I0224 03:15:42.289366 31411 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-k6m47/master-0-debug-7njl2"] Feb 24 03:15:42.292656 master-0 kubenswrapper[31411]: I0224 03:15:42.291248 31411 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-k6m47/master-0-debug-7njl2" Feb 24 03:15:42.423119 master-0 kubenswrapper[31411]: I0224 03:15:42.421703 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52rg\" (UniqueName: \"kubernetes.io/projected/edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68-kube-api-access-b52rg\") pod \"master-0-debug-7njl2\" (UID: \"edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68\") " pod="openshift-must-gather-k6m47/master-0-debug-7njl2" Feb 24 03:15:42.423742 master-0 kubenswrapper[31411]: I0224 03:15:42.423652 31411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68-host\") pod \"master-0-debug-7njl2\" (UID: \"edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68\") " pod="openshift-must-gather-k6m47/master-0-debug-7njl2" Feb 24 03:15:42.528626 master-0 kubenswrapper[31411]: I0224 03:15:42.528536 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68-host\") pod \"master-0-debug-7njl2\" (UID: \"edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68\") " pod="openshift-must-gather-k6m47/master-0-debug-7njl2" Feb 24 03:15:42.529044 master-0 kubenswrapper[31411]: I0224 03:15:42.528732 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68-host\") pod \"master-0-debug-7njl2\" (UID: \"edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68\") " pod="openshift-must-gather-k6m47/master-0-debug-7njl2" Feb 24 03:15:42.529245 master-0 kubenswrapper[31411]: I0224 03:15:42.529219 31411 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b52rg\" (UniqueName: \"kubernetes.io/projected/edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68-kube-api-access-b52rg\") pod \"master-0-debug-7njl2\" 
(UID: \"edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68\") " pod="openshift-must-gather-k6m47/master-0-debug-7njl2" Feb 24 03:15:42.555056 master-0 kubenswrapper[31411]: I0224 03:15:42.554879 31411 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b52rg\" (UniqueName: \"kubernetes.io/projected/edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68-kube-api-access-b52rg\") pod \"master-0-debug-7njl2\" (UID: \"edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68\") " pod="openshift-must-gather-k6m47/master-0-debug-7njl2" Feb 24 03:15:42.628734 master-0 kubenswrapper[31411]: I0224 03:15:42.628655 31411 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-k6m47/master-0-debug-7njl2" Feb 24 03:15:42.697497 master-0 kubenswrapper[31411]: W0224 03:15:42.697397 31411 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedbbc3d5_ca42_424b_91ee_1bf0fe8b0c68.slice/crio-9ac27c6afaa79622c0be7b234c2f55872a23b0289cb8e89348e1fdd9d8816e50 WatchSource:0}: Error finding container 9ac27c6afaa79622c0be7b234c2f55872a23b0289cb8e89348e1fdd9d8816e50: Status 404 returned error can't find the container with id 9ac27c6afaa79622c0be7b234c2f55872a23b0289cb8e89348e1fdd9d8816e50 Feb 24 03:15:43.003648 master-0 kubenswrapper[31411]: I0224 03:15:43.001706 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/master-0-debug-7njl2" event={"ID":"edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68","Type":"ContainerStarted","Data":"9ac27c6afaa79622c0be7b234c2f55872a23b0289cb8e89348e1fdd9d8816e50"} Feb 24 03:15:43.721544 master-0 kubenswrapper[31411]: I0224 03:15:43.721495 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/cluster-cloud-controller-manager/0.log" Feb 24 03:15:43.723432 master-0 kubenswrapper[31411]: I0224 
03:15:43.722943 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/cluster-cloud-controller-manager/1.log" Feb 24 03:15:43.741997 master-0 kubenswrapper[31411]: I0224 03:15:43.741662 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/config-sync-controllers/1.log" Feb 24 03:15:43.743758 master-0 kubenswrapper[31411]: I0224 03:15:43.743477 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/config-sync-controllers/0.log" Feb 24 03:15:43.768565 master-0 kubenswrapper[31411]: I0224 03:15:43.768457 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-8znkt_8e70a9f5-1154-40e9-a487-21e36e7f420a/kube-rbac-proxy/0.log" Feb 24 03:15:44.075697 master-0 kubenswrapper[31411]: I0224 03:15:44.071124 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-6ac23-api-0_1097a77b-8299-4100-b3f4-d94348cf6578/cinder-6ac23-api-log/0.log" Feb 24 03:15:44.147599 master-0 kubenswrapper[31411]: I0224 03:15:44.147522 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-6ac23-api-0_1097a77b-8299-4100-b3f4-d94348cf6578/cinder-api/0.log" Feb 24 03:15:44.263424 master-0 kubenswrapper[31411]: I0224 03:15:44.261759 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-6ac23-backup-0_eaa90bcd-7367-47bd-ab29-0fe04562014b/cinder-backup/0.log" Feb 24 03:15:44.298163 master-0 kubenswrapper[31411]: I0224 03:15:44.298086 31411 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_cinder-6ac23-backup-0_eaa90bcd-7367-47bd-ab29-0fe04562014b/probe/0.log" Feb 24 03:15:44.495162 master-0 kubenswrapper[31411]: I0224 03:15:44.495096 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-6ac23-scheduler-0_ed7505b2-4ddf-4df3-a4f7-7b198aacd70b/cinder-scheduler/0.log" Feb 24 03:15:44.560251 master-0 kubenswrapper[31411]: I0224 03:15:44.532453 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-6ac23-scheduler-0_ed7505b2-4ddf-4df3-a4f7-7b198aacd70b/probe/0.log" Feb 24 03:15:44.623547 master-0 kubenswrapper[31411]: I0224 03:15:44.623482 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-6ac23-volume-lvm-iscsi-0_d6d3b198-9449-43a4-9252-2659f60e7959/cinder-volume/0.log" Feb 24 03:15:44.663669 master-0 kubenswrapper[31411]: I0224 03:15:44.663617 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-6ac23-volume-lvm-iscsi-0_d6d3b198-9449-43a4-9252-2659f60e7959/probe/0.log" Feb 24 03:15:44.690848 master-0 kubenswrapper[31411]: I0224 03:15:44.690788 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7586c46c57-vgvpz_c86d48d3-ae36-493d-8e45-02729b2681f1/dnsmasq-dns/0.log" Feb 24 03:15:44.700591 master-0 kubenswrapper[31411]: I0224 03:15:44.700527 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7586c46c57-vgvpz_c86d48d3-ae36-493d-8e45-02729b2681f1/init/0.log" Feb 24 03:15:44.804048 master-0 kubenswrapper[31411]: I0224 03:15:44.803896 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-8705a-default-external-api-0_9860c0f4-4e67-4271-8e5f-e69506201f8b/glance-log/0.log" Feb 24 03:15:44.830607 master-0 kubenswrapper[31411]: I0224 03:15:44.830101 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-8705a-default-external-api-0_9860c0f4-4e67-4271-8e5f-e69506201f8b/glance-httpd/0.log" Feb 
24 03:15:44.922030 master-0 kubenswrapper[31411]: I0224 03:15:44.921969 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-8705a-default-internal-api-0_4a09f571-15e6-485f-af69-147f5e94181f/glance-log/0.log" Feb 24 03:15:44.948959 master-0 kubenswrapper[31411]: I0224 03:15:44.948860 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-8705a-default-internal-api-0_4a09f571-15e6-485f-af69-147f5e94181f/glance-httpd/0.log" Feb 24 03:15:44.966889 master-0 kubenswrapper[31411]: I0224 03:15:44.966772 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-85b75c94bc-pp6mc_da4d9085-6b7b-4507-803b-39a20e05bf2c/ironic-api-log/0.log" Feb 24 03:15:45.029984 master-0 kubenswrapper[31411]: I0224 03:15:45.029920 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-85b75c94bc-pp6mc_da4d9085-6b7b-4507-803b-39a20e05bf2c/ironic-api/0.log" Feb 24 03:15:45.043677 master-0 kubenswrapper[31411]: I0224 03:15:45.043164 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-85b75c94bc-pp6mc_da4d9085-6b7b-4507-803b-39a20e05bf2c/init/0.log" Feb 24 03:15:45.073332 master-0 kubenswrapper[31411]: I0224 03:15:45.071220 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d289f4ce-9a2f-4d57-bf7b-414618c7c4e8/ironic-conductor/0.log" Feb 24 03:15:45.094769 master-0 kubenswrapper[31411]: I0224 03:15:45.094710 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d289f4ce-9a2f-4d57-bf7b-414618c7c4e8/httpboot/0.log" Feb 24 03:15:45.116169 master-0 kubenswrapper[31411]: I0224 03:15:45.116082 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d289f4ce-9a2f-4d57-bf7b-414618c7c4e8/dnsmasq/0.log" Feb 24 03:15:45.131879 master-0 kubenswrapper[31411]: I0224 03:15:45.131815 31411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-conductor-0_d289f4ce-9a2f-4d57-bf7b-414618c7c4e8/init/0.log" Feb 24 03:15:45.145833 master-0 kubenswrapper[31411]: I0224 03:15:45.145770 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d289f4ce-9a2f-4d57-bf7b-414618c7c4e8/ironic-python-agent-init/0.log" Feb 24 03:15:45.971284 master-0 kubenswrapper[31411]: I0224 03:15:45.971219 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d289f4ce-9a2f-4d57-bf7b-414618c7c4e8/pxe-init/0.log" Feb 24 03:15:46.044920 master-0 kubenswrapper[31411]: I0224 03:15:46.044846 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f57ae074-7d06-4293-91d0-f5dbd04ff2f4/ironic-inspector-httpd/0.log" Feb 24 03:15:46.113851 master-0 kubenswrapper[31411]: I0224 03:15:46.113780 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f57ae074-7d06-4293-91d0-f5dbd04ff2f4/ironic-inspector/0.log" Feb 24 03:15:46.124083 master-0 kubenswrapper[31411]: I0224 03:15:46.123859 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f57ae074-7d06-4293-91d0-f5dbd04ff2f4/inspector-httpboot/0.log" Feb 24 03:15:46.136433 master-0 kubenswrapper[31411]: I0224 03:15:46.136110 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f57ae074-7d06-4293-91d0-f5dbd04ff2f4/ramdisk-logs/0.log" Feb 24 03:15:46.145032 master-0 kubenswrapper[31411]: I0224 03:15:46.144906 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f57ae074-7d06-4293-91d0-f5dbd04ff2f4/inspector-dnsmasq/0.log" Feb 24 03:15:46.152887 master-0 kubenswrapper[31411]: I0224 03:15:46.152792 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f57ae074-7d06-4293-91d0-f5dbd04ff2f4/ironic-python-agent-init/0.log" Feb 24 03:15:46.167400 master-0 
kubenswrapper[31411]: I0224 03:15:46.166904 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f57ae074-7d06-4293-91d0-f5dbd04ff2f4/inspector-pxe-init/0.log" Feb 24 03:15:46.185904 master-0 kubenswrapper[31411]: I0224 03:15:46.185776 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-7d8f6784f6-dqjdm_72106e8c-2a98-4a82-9f36-c820986c5665/ironic-neutron-agent/2.log" Feb 24 03:15:46.189620 master-0 kubenswrapper[31411]: I0224 03:15:46.189565 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-7d8f6784f6-dqjdm_72106e8c-2a98-4a82-9f36-c820986c5665/ironic-neutron-agent/1.log" Feb 24 03:15:46.203623 master-0 kubenswrapper[31411]: I0224 03:15:46.203587 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-fcr59_74a7801b-b7a4-4292-91b3-6285c239aeb7/kube-rbac-proxy/0.log" Feb 24 03:15:46.241020 master-0 kubenswrapper[31411]: I0224 03:15:46.240877 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-6968c58f46-fcr59_74a7801b-b7a4-4292-91b3-6285c239aeb7/cloud-credential-operator/0.log" Feb 24 03:15:46.294448 master-0 kubenswrapper[31411]: I0224 03:15:46.294393 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-8f98fb65f-btxw6_14b140d8-b731-46c1-bb66-63e4345873c0/keystone-api/0.log" Feb 24 03:15:46.308746 master-0 kubenswrapper[31411]: I0224 03:15:46.307670 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29531701-28wv4_07e0d56e-d846-405c-9f9f-ba21ade2b8c3/keystone-cron/0.log" Feb 24 03:15:48.397270 master-0 kubenswrapper[31411]: I0224 03:15:48.397212 31411 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-ccrxg_c92835f0-7f32-4584-8304-843d7979392a/openshift-config-operator/1.log" Feb 24 03:15:48.400629 master-0 kubenswrapper[31411]: I0224 03:15:48.400586 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-ccrxg_c92835f0-7f32-4584-8304-843d7979392a/openshift-config-operator/2.log" Feb 24 03:15:48.420291 master-0 kubenswrapper[31411]: I0224 03:15:48.420245 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-ccrxg_c92835f0-7f32-4584-8304-843d7979392a/openshift-api/0.log" Feb 24 03:15:49.617473 master-0 kubenswrapper[31411]: I0224 03:15:49.617406 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-gmjbd_8ea06201-f138-475b-86de-769d333048cb/console-operator/1.log" Feb 24 03:15:49.668338 master-0 kubenswrapper[31411]: I0224 03:15:49.665432 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5df5ffc47c-gmjbd_8ea06201-f138-475b-86de-769d333048cb/console-operator/2.log" Feb 24 03:15:50.000762 master-0 kubenswrapper[31411]: I0224 03:15:50.000373 31411 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-k6m47/perf-node-gather-daemonset-rcfhc" Feb 24 03:15:50.676396 master-0 kubenswrapper[31411]: I0224 03:15:50.676327 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7db5f64756-h92rx_4cdc2a12-e6fc-4502-9814-32dd7b61b02e/console/0.log" Feb 24 03:15:50.757196 master-0 kubenswrapper[31411]: I0224 03:15:50.757140 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-955b69498-x847l_2fbb8ae4-fc8b-46ff-a295-10a1207dd571/download-server/0.log" Feb 24 03:15:51.863934 master-0 kubenswrapper[31411]: I0224 
03:15:51.863869 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-f94476f49-c5wlk_011c6603-d533-4449-b409-f6f698a3bd50/cluster-storage-operator/0.log" Feb 24 03:15:51.888038 master-0 kubenswrapper[31411]: I0224 03:15:51.887927 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/4.log" Feb 24 03:15:51.894055 master-0 kubenswrapper[31411]: I0224 03:15:51.893976 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-8l58x_f6e7b773-7ecd-4a5c-8bef-d672f371e7e5/snapshot-controller/5.log" Feb 24 03:15:51.935284 master-0 kubenswrapper[31411]: I0224 03:15:51.935194 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-c95qc_7b098bd4-5751-4b01-8409-0688fd29233e/csi-snapshot-controller-operator/0.log" Feb 24 03:15:51.940025 master-0 kubenswrapper[31411]: I0224 03:15:51.939902 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-6fb4df594f-c95qc_7b098bd4-5751-4b01-8409-0688fd29233e/csi-snapshot-controller-operator/1.log" Feb 24 03:15:53.013220 master-0 kubenswrapper[31411]: I0224 03:15:53.013053 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-8c7d49845-hxcn2_2cb764f6-40f8-4e87-8be0-b9d7b0364201/dns-operator/0.log" Feb 24 03:15:53.032715 master-0 kubenswrapper[31411]: I0224 03:15:53.032662 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-8c7d49845-hxcn2_2cb764f6-40f8-4e87-8be0-b9d7b0364201/kube-rbac-proxy/0.log" Feb 24 03:15:54.057293 master-0 kubenswrapper[31411]: I0224 03:15:54.056594 31411 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-dns_dns-default-5rf6m_8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c/dns/0.log" Feb 24 03:15:54.084891 master-0 kubenswrapper[31411]: I0224 03:15:54.084839 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-5rf6m_8e90470d-20e0-4eb4-bc8e-b4e4c19aab3c/kube-rbac-proxy/0.log" Feb 24 03:15:54.117295 master-0 kubenswrapper[31411]: I0224 03:15:54.117236 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-4lwwp_390a7aa5-c7f7-4baf-a2d2-e6da9a465042/dns-node-resolver/0.log" Feb 24 03:15:55.337602 master-0 kubenswrapper[31411]: I0224 03:15:55.331228 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jb9vb_fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/etcd-operator/1.log" Feb 24 03:15:55.350598 master-0 kubenswrapper[31411]: I0224 03:15:55.347020 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jb9vb_fbe9964a-9e82-48e9-82b0-7c07e4cec3a2/etcd-operator/0.log" Feb 24 03:15:56.395474 master-0 kubenswrapper[31411]: I0224 03:15:56.395404 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcdctl/0.log" Feb 24 03:15:56.872821 master-0 kubenswrapper[31411]: I0224 03:15:56.871475 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd/0.log" Feb 24 03:15:56.901813 master-0 kubenswrapper[31411]: I0224 03:15:56.901744 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-metrics/0.log" Feb 24 03:15:56.932100 master-0 kubenswrapper[31411]: I0224 03:15:56.932033 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-readyz/0.log" Feb 24 03:15:56.953909 
master-0 kubenswrapper[31411]: I0224 03:15:56.953837 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-rev/0.log" Feb 24 03:15:56.974712 master-0 kubenswrapper[31411]: I0224 03:15:56.973372 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/setup/0.log" Feb 24 03:15:56.987318 master-0 kubenswrapper[31411]: I0224 03:15:56.987206 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-ensure-env-vars/0.log" Feb 24 03:15:57.004550 master-0 kubenswrapper[31411]: I0224 03:15:57.004485 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_b419b8533666d3ae7054c771ce97a95f/etcd-resources-copy/0.log" Feb 24 03:15:57.078520 master-0 kubenswrapper[31411]: I0224 03:15:57.077113 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_64b7ea36-8849-4955-80b5-c7e7c12fcc29/installer/0.log" Feb 24 03:15:57.161725 master-0 kubenswrapper[31411]: I0224 03:15:57.148558 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_50c78047-1c4d-4535-ba2c-31f080d6a57d/installer/0.log" Feb 24 03:15:58.373138 master-0 kubenswrapper[31411]: I0224 03:15:58.369336 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-779979bdf7-d7sx4_c84dc269-43ae-4083-9998-a0b3c90bb681/cluster-image-registry-operator/0.log" Feb 24 03:15:58.425895 master-0 kubenswrapper[31411]: I0224 03:15:58.425767 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-779979bdf7-d7sx4_c84dc269-43ae-4083-9998-a0b3c90bb681/cluster-image-registry-operator/1.log" Feb 24 03:15:58.444319 master-0 kubenswrapper[31411]: I0224 03:15:58.443976 31411 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-xrqvm_9a7e8d1d-3fc9-4111-9a9f-a1c939b1978a/node-ca/0.log" Feb 24 03:15:59.503764 master-0 kubenswrapper[31411]: I0224 03:15:59.503633 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/4.log" Feb 24 03:15:59.519600 master-0 kubenswrapper[31411]: I0224 03:15:59.519438 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/ingress-operator/5.log" Feb 24 03:15:59.540342 master-0 kubenswrapper[31411]: I0224 03:15:59.539552 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-6dlqb_c3278a82-ee70-4d6c-9c96-f8cb1bcb9334/kube-rbac-proxy/0.log" Feb 24 03:16:00.302555 master-0 kubenswrapper[31411]: I0224 03:16:00.302483 31411 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-k6m47/master-0-debug-7njl2" event={"ID":"edbbc3d5-ca42-424b-91ee-1bf0fe8b0c68","Type":"ContainerStarted","Data":"9f8d62bdf9e4274398fd322ca60174a8ed4c12823aa85e3b6ccd0d3a9000b802"} Feb 24 03:16:00.415699 master-0 kubenswrapper[31411]: I0224 03:16:00.415616 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_cca46d62-e09a-45a1-ae65-0465747dc0a7/memcached/0.log" Feb 24 03:16:00.559496 master-0 kubenswrapper[31411]: I0224 03:16:00.559447 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-jjpsc_3e36c9eb-0368-46dc-af84-9c602a15555d/serve-healthcheck-canary/0.log" Feb 24 03:16:00.580162 master-0 kubenswrapper[31411]: I0224 03:16:00.580111 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b46dbc6bf-ngrn9_4cae4ee5-812a-4144-bd0a-aadc0a96ace5/neutron-api/0.log" Feb 24 
03:16:00.598734 master-0 kubenswrapper[31411]: I0224 03:16:00.598641 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b46dbc6bf-ngrn9_4cae4ee5-812a-4144-bd0a-aadc0a96ace5/neutron-httpd/0.log"
Feb 24 03:16:00.699714 master-0 kubenswrapper[31411]: I0224 03:16:00.697843 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_685eb0ae-79cd-488a-894f-2ef620e61225/nova-api-log/0.log"
Feb 24 03:16:00.928900 master-0 kubenswrapper[31411]: I0224 03:16:00.928758 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_685eb0ae-79cd-488a-894f-2ef620e61225/nova-api-api/0.log"
Feb 24 03:16:01.045603 master-0 kubenswrapper[31411]: I0224 03:16:01.041776 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_a2d00e0e-a4bc-45ca-bf97-ee71a47cff31/nova-cell0-conductor-conductor/0.log"
Feb 24 03:16:01.195045 master-0 kubenswrapper[31411]: I0224 03:16:01.194879 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-compute-ironic-compute-0_74a07828-16f1-4b69-bfbd-a0e76519ce98/nova-cell1-compute-ironic-compute-compute/0.log"
Feb 24 03:16:01.309031 master-0 kubenswrapper[31411]: I0224 03:16:01.308964 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_908cf055-4162-49bd-93cb-4d0a9add9b11/nova-cell1-conductor-conductor/0.log"
Feb 24 03:16:01.402631 master-0 kubenswrapper[31411]: I0224 03:16:01.401920 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_47607b00-65e8-4d2e-90de-95f633ed0872/nova-cell1-novncproxy-novncproxy/0.log"
Feb 24 03:16:01.481003 master-0 kubenswrapper[31411]: I0224 03:16:01.480871 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-59b498fcfb-dbkwd_8e0c87ae-6387-4c00-b03d-582566907fb6/insights-operator/0.log"
Feb 24 03:16:01.511229 master-0 kubenswrapper[31411]: I0224 03:16:01.511167 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-59b498fcfb-dbkwd_8e0c87ae-6387-4c00-b03d-582566907fb6/insights-operator/1.log"
Feb 24 03:16:01.545863 master-0 kubenswrapper[31411]: I0224 03:16:01.545789 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a42cba47-e321-4b11-8df3-14382a641521/nova-metadata-log/0.log"
Feb 24 03:16:02.475056 master-0 kubenswrapper[31411]: I0224 03:16:02.474975 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a42cba47-e321-4b11-8df3-14382a641521/nova-metadata-metadata/0.log"
Feb 24 03:16:02.571205 master-0 kubenswrapper[31411]: I0224 03:16:02.571160 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ad415201-c48f-463e-b907-3a9bf748006d/nova-scheduler-scheduler/0.log"
Feb 24 03:16:02.597497 master-0 kubenswrapper[31411]: I0224 03:16:02.597352 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_86b07869-3ccf-46a9-9ca3-9954a1508cff/galera/0.log"
Feb 24 03:16:02.616234 master-0 kubenswrapper[31411]: I0224 03:16:02.616174 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_86b07869-3ccf-46a9-9ca3-9954a1508cff/mysql-bootstrap/0.log"
Feb 24 03:16:02.643846 master-0 kubenswrapper[31411]: I0224 03:16:02.643783 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_93374608-d6a1-4e71-8682-3a86e5815f29/galera/0.log"
Feb 24 03:16:02.660306 master-0 kubenswrapper[31411]: I0224 03:16:02.660264 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_93374608-d6a1-4e71-8682-3a86e5815f29/mysql-bootstrap/0.log"
Feb 24 03:16:02.677478 master-0 kubenswrapper[31411]: I0224 03:16:02.677429 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_361cf20c-412e-43a0-b2e7-44a9d4964b05/openstackclient/0.log"
Feb 24 03:16:02.718294 master-0 kubenswrapper[31411]: I0224 03:16:02.718224 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hjmv9_76252167-d1e5-4ee1-b26f-853eb9e161a7/ovn-controller/0.log"
Feb 24 03:16:02.726549 master-0 kubenswrapper[31411]: I0224 03:16:02.726509 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5w4cf_77be4a7f-0ff5-439e-9298-8e071291ba72/openstack-network-exporter/0.log"
Feb 24 03:16:02.742421 master-0 kubenswrapper[31411]: I0224 03:16:02.742378 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lp2wm_67e9af05-4de5-4257-b103-4af520af6fec/ovsdb-server/0.log"
Feb 24 03:16:02.772050 master-0 kubenswrapper[31411]: I0224 03:16:02.772000 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lp2wm_67e9af05-4de5-4257-b103-4af520af6fec/ovs-vswitchd/0.log"
Feb 24 03:16:02.784797 master-0 kubenswrapper[31411]: I0224 03:16:02.784742 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lp2wm_67e9af05-4de5-4257-b103-4af520af6fec/ovsdb-server-init/0.log"
Feb 24 03:16:02.808633 master-0 kubenswrapper[31411]: I0224 03:16:02.808565 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd15c92e-3d10-4ad1-8769-ff495fcf6f49/ovn-northd/0.log"
Feb 24 03:16:02.820379 master-0 kubenswrapper[31411]: I0224 03:16:02.820317 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bd15c92e-3d10-4ad1-8769-ff495fcf6f49/openstack-network-exporter/0.log"
Feb 24 03:16:02.840045 master-0 kubenswrapper[31411]: I0224 03:16:02.839984 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d2e5cfb6-e3cd-428c-9efe-8d23b1f289df/ovsdbserver-nb/0.log"
Feb 24 03:16:02.854715 master-0 kubenswrapper[31411]: I0224 03:16:02.851144 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d2e5cfb6-e3cd-428c-9efe-8d23b1f289df/openstack-network-exporter/0.log"
Feb 24 03:16:02.876323 master-0 kubenswrapper[31411]: I0224 03:16:02.875362 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_faa44386-3634-42fe-b2fc-6cdd257a8b1e/ovsdbserver-sb/0.log"
Feb 24 03:16:02.883904 master-0 kubenswrapper[31411]: I0224 03:16:02.883848 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_faa44386-3634-42fe-b2fc-6cdd257a8b1e/openstack-network-exporter/0.log"
Feb 24 03:16:02.986948 master-0 kubenswrapper[31411]: I0224 03:16:02.986857 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f597cf46d-llslv_2b01bb6c-8488-4275-a2b0-ee35dbd9eb39/placement-log/0.log"
Feb 24 03:16:03.041848 master-0 kubenswrapper[31411]: I0224 03:16:03.041783 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f597cf46d-llslv_2b01bb6c-8488-4275-a2b0-ee35dbd9eb39/placement-api/0.log"
Feb 24 03:16:03.083976 master-0 kubenswrapper[31411]: I0224 03:16:03.083908 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fc47c58d-5bd1-4cb0-942f-6a048792da9a/rabbitmq/0.log"
Feb 24 03:16:03.090790 master-0 kubenswrapper[31411]: I0224 03:16:03.090547 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fc47c58d-5bd1-4cb0-942f-6a048792da9a/setup-container/0.log"
Feb 24 03:16:03.143551 master-0 kubenswrapper[31411]: I0224 03:16:03.143367 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5680b3af-dae8-4617-80b2-30c0a9818130/rabbitmq/0.log"
Feb 24 03:16:03.149335 master-0 kubenswrapper[31411]: I0224 03:16:03.149296 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5680b3af-dae8-4617-80b2-30c0a9818130/setup-container/0.log"
Feb 24 03:16:03.266357 master-0 kubenswrapper[31411]: I0224 03:16:03.266299 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-675fbd6d58-pdtfj_ea5f1aef-3469-4396-b111-a7fd2dc4f40c/proxy-httpd/0.log"
Feb 24 03:16:03.280694 master-0 kubenswrapper[31411]: I0224 03:16:03.280634 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-675fbd6d58-pdtfj_ea5f1aef-3469-4396-b111-a7fd2dc4f40c/proxy-server/0.log"
Feb 24 03:16:03.293114 master-0 kubenswrapper[31411]: I0224 03:16:03.293048 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gm5ph_c1211f9d-8d58-49d4-8892-61d252358fa6/swift-ring-rebalance/0.log"
Feb 24 03:16:03.330085 master-0 kubenswrapper[31411]: I0224 03:16:03.329996 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/account-server/0.log"
Feb 24 03:16:03.356182 master-0 kubenswrapper[31411]: I0224 03:16:03.356119 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/account-replicator/0.log"
Feb 24 03:16:03.365436 master-0 kubenswrapper[31411]: I0224 03:16:03.365389 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/account-auditor/0.log"
Feb 24 03:16:03.386688 master-0 kubenswrapper[31411]: I0224 03:16:03.386608 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/account-reaper/0.log"
Feb 24 03:16:03.399453 master-0 kubenswrapper[31411]: I0224 03:16:03.399313 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/container-server/0.log"
Feb 24 03:16:03.444322 master-0 kubenswrapper[31411]: I0224 03:16:03.444272 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/container-replicator/0.log"
Feb 24 03:16:03.451704 master-0 kubenswrapper[31411]: I0224 03:16:03.451644 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/container-auditor/0.log"
Feb 24 03:16:03.461485 master-0 kubenswrapper[31411]: I0224 03:16:03.461431 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/container-updater/0.log"
Feb 24 03:16:03.469094 master-0 kubenswrapper[31411]: I0224 03:16:03.469030 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/object-server/0.log"
Feb 24 03:16:03.499077 master-0 kubenswrapper[31411]: I0224 03:16:03.499009 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/object-replicator/0.log"
Feb 24 03:16:03.525282 master-0 kubenswrapper[31411]: I0224 03:16:03.525227 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/object-auditor/0.log"
Feb 24 03:16:03.534521 master-0 kubenswrapper[31411]: I0224 03:16:03.534483 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/object-updater/0.log"
Feb 24 03:16:03.546915 master-0 kubenswrapper[31411]: I0224 03:16:03.546860 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/object-expirer/0.log"
Feb 24 03:16:03.554058 master-0 kubenswrapper[31411]: I0224 03:16:03.553899 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/rsync/0.log"
Feb 24 03:16:03.559536 master-0 kubenswrapper[31411]: I0224 03:16:03.559498 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9c814b7c-b62b-4104-8139-8e6cd597d33f/swift-recon-cron/0.log"
Feb 24 03:16:03.867732 master-0 kubenswrapper[31411]: I0224 03:16:03.867567 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_977f33ee-175a-43f4-8b50-f539e1c2c583/alertmanager/0.log"
Feb 24 03:16:03.882812 master-0 kubenswrapper[31411]: I0224 03:16:03.882737 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_977f33ee-175a-43f4-8b50-f539e1c2c583/config-reloader/0.log"
Feb 24 03:16:03.899205 master-0 kubenswrapper[31411]: I0224 03:16:03.899065 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_977f33ee-175a-43f4-8b50-f539e1c2c583/kube-rbac-proxy-web/0.log"
Feb 24 03:16:03.917229 master-0 kubenswrapper[31411]: I0224 03:16:03.917169 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_977f33ee-175a-43f4-8b50-f539e1c2c583/kube-rbac-proxy/0.log"
Feb 24 03:16:03.938710 master-0 kubenswrapper[31411]: I0224 03:16:03.938636 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_977f33ee-175a-43f4-8b50-f539e1c2c583/kube-rbac-proxy-metric/0.log"
Feb 24 03:16:03.953420 master-0 kubenswrapper[31411]: I0224 03:16:03.953370 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_977f33ee-175a-43f4-8b50-f539e1c2c583/prom-label-proxy/0.log"
Feb 24 03:16:03.972796 master-0 kubenswrapper[31411]: I0224 03:16:03.972756 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_977f33ee-175a-43f4-8b50-f539e1c2c583/init-config-reloader/0.log"
Feb 24 03:16:04.027377 master-0 kubenswrapper[31411]: I0224 03:16:04.027300 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-6bb6d78bf-fkzdb_f2e9cdff-8c15-43df-b8df-7fe3a73fda86/cluster-monitoring-operator/0.log"
Feb 24 03:16:04.094180 master-0 kubenswrapper[31411]: I0224 03:16:04.094134 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-f6f26_a5305004-5311-4bc4-ad7c-6670f97c89cb/kube-state-metrics/0.log"
Feb 24 03:16:04.122005 master-0 kubenswrapper[31411]: I0224 03:16:04.121881 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-f6f26_a5305004-5311-4bc4-ad7c-6670f97c89cb/kube-rbac-proxy-main/0.log"
Feb 24 03:16:04.139700 master-0 kubenswrapper[31411]: I0224 03:16:04.139654 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-59584d565f-f6f26_a5305004-5311-4bc4-ad7c-6670f97c89cb/kube-rbac-proxy-self/0.log"
Feb 24 03:16:04.157936 master-0 kubenswrapper[31411]: I0224 03:16:04.157892 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-67ddc7b799-zlnvf_02fc214d-8c40-4ed5-9f18-8bf5863d8d70/metrics-server/0.log"
Feb 24 03:16:04.174533 master-0 kubenswrapper[31411]: I0224 03:16:04.174499 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-5d9ddb8754-xtrdd_823c983e-f9a6-4074-9a69-14ec0666dfd5/monitoring-plugin/0.log"
Feb 24 03:16:04.200460 master-0 kubenswrapper[31411]: I0224 03:16:04.200398 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-2qn8m_24765ff1-5e7d-4100-ad81-8f73555fc0a2/node-exporter/0.log"
Feb 24 03:16:04.214457 master-0 kubenswrapper[31411]: I0224 03:16:04.214403 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-2qn8m_24765ff1-5e7d-4100-ad81-8f73555fc0a2/kube-rbac-proxy/0.log"
Feb 24 03:16:04.225792 master-0 kubenswrapper[31411]: I0224 03:16:04.225750 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-2qn8m_24765ff1-5e7d-4100-ad81-8f73555fc0a2/init-textfile/0.log"
Feb 24 03:16:04.247980 master-0 kubenswrapper[31411]: I0224 03:16:04.247904 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-swtr6_608a8a56-daee-4fa1-8300-42155217c68b/kube-rbac-proxy-main/0.log"
Feb 24 03:16:04.269675 master-0 kubenswrapper[31411]: I0224 03:16:04.269622 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-swtr6_608a8a56-daee-4fa1-8300-42155217c68b/kube-rbac-proxy-self/0.log"
Feb 24 03:16:04.299884 master-0 kubenswrapper[31411]: I0224 03:16:04.299825 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-6dbff8cb4c-swtr6_608a8a56-daee-4fa1-8300-42155217c68b/openshift-state-metrics/0.log"
Feb 24 03:16:04.367633 master-0 kubenswrapper[31411]: I0224 03:16:04.367546 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0/prometheus/0.log"
Feb 24 03:16:04.386842 master-0 kubenswrapper[31411]: I0224 03:16:04.386688 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0/config-reloader/0.log"
Feb 24 03:16:04.407882 master-0 kubenswrapper[31411]: I0224 03:16:04.407825 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0/thanos-sidecar/0.log"
Feb 24 03:16:04.426042 master-0 kubenswrapper[31411]: I0224 03:16:04.425669 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0/kube-rbac-proxy-web/0.log"
Feb 24 03:16:04.439482 master-0 kubenswrapper[31411]: I0224 03:16:04.439431 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0/kube-rbac-proxy/0.log"
Feb 24 03:16:04.458903 master-0 kubenswrapper[31411]: I0224 03:16:04.458640 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0/kube-rbac-proxy-thanos/0.log"
Feb 24 03:16:04.748144 master-0 kubenswrapper[31411]: I0224 03:16:04.748082 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_c3f5dc53-a9d4-4d18-a3ed-9415a1849bf0/init-config-reloader/0.log"
Feb 24 03:16:04.804092 master-0 kubenswrapper[31411]: I0224 03:16:04.804014 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-754bc4d665-66lml_df2b8111-41c6-4333-b473-4c08fb836f70/prometheus-operator/0.log"
Feb 24 03:16:04.855495 master-0 kubenswrapper[31411]: I0224 03:16:04.855418 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-754bc4d665-66lml_df2b8111-41c6-4333-b473-4c08fb836f70/kube-rbac-proxy/0.log"
Feb 24 03:16:04.879067 master-0 kubenswrapper[31411]: I0224 03:16:04.878985 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-75d56db95f-9gkp2_22a83952-32ec-48f7-85cd-209b62362ae2/prometheus-operator-admission-webhook/0.log"
Feb 24 03:16:04.903737 master-0 kubenswrapper[31411]: I0224 03:16:04.903688 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cc55f5fb6-hcn4g_d57a5233-cd86-45d7-9f96-92eb0cc06b7d/telemeter-client/0.log"
Feb 24 03:16:04.919556 master-0 kubenswrapper[31411]: I0224 03:16:04.919501 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cc55f5fb6-hcn4g_d57a5233-cd86-45d7-9f96-92eb0cc06b7d/reload/0.log"
Feb 24 03:16:04.941421 master-0 kubenswrapper[31411]: I0224 03:16:04.936181 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cc55f5fb6-hcn4g_d57a5233-cd86-45d7-9f96-92eb0cc06b7d/kube-rbac-proxy/0.log"
Feb 24 03:16:04.962423 master-0 kubenswrapper[31411]: I0224 03:16:04.962370 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-69565684c5-snfqm_133397d5-a069-4b31-b4d8-a7442bc62eba/thanos-query/0.log"
Feb 24 03:16:04.985487 master-0 kubenswrapper[31411]: I0224 03:16:04.985421 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-69565684c5-snfqm_133397d5-a069-4b31-b4d8-a7442bc62eba/kube-rbac-proxy-web/0.log"
Feb 24 03:16:05.005729 master-0 kubenswrapper[31411]: I0224 03:16:05.005567 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-69565684c5-snfqm_133397d5-a069-4b31-b4d8-a7442bc62eba/kube-rbac-proxy/0.log"
Feb 24 03:16:05.023643 master-0 kubenswrapper[31411]: I0224 03:16:05.023523 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-69565684c5-snfqm_133397d5-a069-4b31-b4d8-a7442bc62eba/prom-label-proxy/0.log"
Feb 24 03:16:05.043595 master-0 kubenswrapper[31411]: I0224 03:16:05.043521 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-69565684c5-snfqm_133397d5-a069-4b31-b4d8-a7442bc62eba/kube-rbac-proxy-rules/0.log"
Feb 24 03:16:05.059715 master-0 kubenswrapper[31411]: I0224 03:16:05.059642 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-69565684c5-snfqm_133397d5-a069-4b31-b4d8-a7442bc62eba/kube-rbac-proxy-metrics/0.log"
Feb 24 03:16:07.839947 master-0 kubenswrapper[31411]: I0224 03:16:07.838768 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-s2t6d_5dd28fe3-673b-4b02-8fab-ab06e03d54e4/controller/0.log"
Feb 24 03:16:07.856513 master-0 kubenswrapper[31411]: I0224 03:16:07.856454 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-s2t6d_5dd28fe3-673b-4b02-8fab-ab06e03d54e4/kube-rbac-proxy/0.log"
Feb 24 03:16:07.882380 master-0 kubenswrapper[31411]: I0224 03:16:07.882294 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/controller/0.log"
Feb 24 03:16:08.994877 master-0 kubenswrapper[31411]: I0224 03:16:08.994813 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-s2t6d_5dd28fe3-673b-4b02-8fab-ab06e03d54e4/controller/0.log"
Feb 24 03:16:09.084531 master-0 kubenswrapper[31411]: I0224 03:16:09.084492 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-s2t6d_5dd28fe3-673b-4b02-8fab-ab06e03d54e4/kube-rbac-proxy/0.log"
Feb 24 03:16:09.087620 master-0 kubenswrapper[31411]: I0224 03:16:09.087525 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq_33033660-765b-494d-8bc4-a6af0592fac5/extract/0.log"
Feb 24 03:16:09.098900 master-0 kubenswrapper[31411]: I0224 03:16:09.098126 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq_33033660-765b-494d-8bc4-a6af0592fac5/util/0.log"
Feb 24 03:16:09.109551 master-0 kubenswrapper[31411]: I0224 03:16:09.109366 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14s7clq_33033660-765b-494d-8bc4-a6af0592fac5/pull/0.log"
Feb 24 03:16:09.116919 master-0 kubenswrapper[31411]: I0224 03:16:09.116846 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/controller/0.log"
Feb 24 03:16:10.234610 master-0 kubenswrapper[31411]: I0224 03:16:10.234254 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/frr/0.log"
Feb 24 03:16:10.281301 master-0 kubenswrapper[31411]: I0224 03:16:10.281244 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/reloader/0.log"
Feb 24 03:16:10.305503 master-0 kubenswrapper[31411]: I0224 03:16:10.304261 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/frr-metrics/0.log"
Feb 24 03:16:10.329706 master-0 kubenswrapper[31411]: I0224 03:16:10.329146 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/kube-rbac-proxy/0.log"
Feb 24 03:16:10.356866 master-0 kubenswrapper[31411]: I0224 03:16:10.356238 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/kube-rbac-proxy-frr/0.log"
Feb 24 03:16:10.375634 master-0 kubenswrapper[31411]: I0224 03:16:10.375563 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/cp-frr-files/0.log"
Feb 24 03:16:10.392388 master-0 kubenswrapper[31411]: I0224 03:16:10.392356 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/cp-reloader/0.log"
Feb 24 03:16:10.408695 master-0 kubenswrapper[31411]: I0224 03:16:10.408642 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/cp-metrics/0.log"
Feb 24 03:16:10.435364 master-0 kubenswrapper[31411]: I0224 03:16:10.435274 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-lthbs_83bea055-a58c-42dd-8ae4-755f7f2944c0/frr-k8s-webhook-server/0.log"
Feb 24 03:16:10.490244 master-0 kubenswrapper[31411]: I0224 03:16:10.490186 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7577845998-zvq74_4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7/manager/0.log"
Feb 24 03:16:10.509320 master-0 kubenswrapper[31411]: I0224 03:16:10.509267 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-559d754c8d-8sgn7_013fb964-8d21-4b63-9afb-521a7e902920/webhook-server/0.log"
Feb 24 03:16:11.174636 master-0 kubenswrapper[31411]: I0224 03:16:11.173158 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbfkl_2376dbda-b2e8-45e5-af4c-7382f0994ae3/speaker/0.log"
Feb 24 03:16:11.200830 master-0 kubenswrapper[31411]: I0224 03:16:11.200763 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbfkl_2376dbda-b2e8-45e5-af4c-7382f0994ae3/kube-rbac-proxy/0.log"
Feb 24 03:16:11.916662 master-0 kubenswrapper[31411]: I0224 03:16:11.916551 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/frr/0.log"
Feb 24 03:16:11.938945 master-0 kubenswrapper[31411]: I0224 03:16:11.937847 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/reloader/0.log"
Feb 24 03:16:11.944416 master-0 kubenswrapper[31411]: I0224 03:16:11.944187 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/frr-metrics/0.log"
Feb 24 03:16:11.954607 master-0 kubenswrapper[31411]: I0224 03:16:11.954541 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/kube-rbac-proxy/0.log"
Feb 24 03:16:11.960935 master-0 kubenswrapper[31411]: I0224 03:16:11.960868 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/kube-rbac-proxy-frr/0.log"
Feb 24 03:16:11.972448 master-0 kubenswrapper[31411]: I0224 03:16:11.972404 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/cp-frr-files/0.log"
Feb 24 03:16:11.981622 master-0 kubenswrapper[31411]: I0224 03:16:11.981549 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/cp-reloader/0.log"
Feb 24 03:16:11.989166 master-0 kubenswrapper[31411]: I0224 03:16:11.989093 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gll2f_092e38f4-b68c-422f-8663-f152fa7bb09f/cp-metrics/0.log"
Feb 24 03:16:12.002472 master-0 kubenswrapper[31411]: I0224 03:16:12.002391 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-lthbs_83bea055-a58c-42dd-8ae4-755f7f2944c0/frr-k8s-webhook-server/0.log"
Feb 24 03:16:12.038259 master-0 kubenswrapper[31411]: I0224 03:16:12.038200 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7577845998-zvq74_4fd22fab-71ee-4af2-9ed2-fab1b5ad38a7/manager/0.log"
Feb 24 03:16:12.049368 master-0 kubenswrapper[31411]: I0224 03:16:12.049322 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-559d754c8d-8sgn7_013fb964-8d21-4b63-9afb-521a7e902920/webhook-server/0.log"
Feb 24 03:16:12.605939 master-0 kubenswrapper[31411]: I0224 03:16:12.604165 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbfkl_2376dbda-b2e8-45e5-af4c-7382f0994ae3/speaker/0.log"
Feb 24 03:16:12.613798 master-0 kubenswrapper[31411]: I0224 03:16:12.613482 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbfkl_2376dbda-b2e8-45e5-af4c-7382f0994ae3/kube-rbac-proxy/0.log"
Feb 24 03:16:13.657131 master-0 kubenswrapper[31411]: I0224 03:16:13.656972 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-8x6sd_6a9ccd8e-d964-4c03-8ffc-51b464030c25/cluster-node-tuning-operator/1.log"
Feb 24 03:16:13.657838 master-0 kubenswrapper[31411]: I0224 03:16:13.657805 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-8x6sd_6a9ccd8e-d964-4c03-8ffc-51b464030c25/cluster-node-tuning-operator/0.log"
Feb 24 03:16:13.683760 master-0 kubenswrapper[31411]: I0224 03:16:13.683675 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-26b2v_638b3f88-0386-4f30-8ca5-6255e8f936fc/tuned/0.log"
Feb 24 03:16:14.293881 master-0 kubenswrapper[31411]: I0224 03:16:14.293802 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-2ldv2_7cac8bf4-b1c9-4b3c-a536-8408f6ad8495/manager/0.log"
Feb 24 03:16:14.905643 master-0 kubenswrapper[31411]: I0224 03:16:14.905483 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-b72xt_dd4e8aac-8f11-4e85-ac94-2160ae3adf4c/manager/0.log"
Feb 24 03:16:14.924944 master-0 kubenswrapper[31411]: I0224 03:16:14.924887 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-dzbvc_129f6086-7edd-41da-adf1-38c9b82e0932/manager/0.log"
Feb 24 03:16:15.023825 master-0 kubenswrapper[31411]: I0224 03:16:15.023769 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-zfd69_44cfb629-0b50-4e8c-9b4c-e329a1b3c533/manager/0.log"
Feb 24 03:16:15.038425 master-0 kubenswrapper[31411]: I0224 03:16:15.035930 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-5t6bt_3bb72077-6f36-439c-8cc0-83bdbfcc3935/manager/0.log"
Feb 24 03:16:15.054916 master-0 kubenswrapper[31411]: I0224 03:16:15.054866 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-49gvb_6ae1849b-a4a6-4f60-bf3d-713c1f0df81f/manager/0.log"
Feb 24 03:16:15.296034 master-0 kubenswrapper[31411]: I0224 03:16:15.295740 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5f879c76b6-2kk8t_37d3adf3-e0cc-4f32-94ee-f89a8f4f49b4/manager/0.log"
Feb 24 03:16:15.384985 master-0 kubenswrapper[31411]: I0224 03:16:15.383493 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-hksp2_2da684b9-3acd-40d2-8562-f212bc136dc5/manager/0.log"
Feb 24 03:16:15.459859 master-0 kubenswrapper[31411]: I0224 03:16:15.459783 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-ws6cb_70a415e4-fc72-4449-87a5-67a04c4ee4aa/manager/0.log"
Feb 24 03:16:15.471312 master-0 kubenswrapper[31411]: I0224 03:16:15.471259 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-psxsg_3e724dd4-e900-4138-90c3-ee1fc4fc8350/manager/0.log"
Feb 24 03:16:15.513373 master-0 kubenswrapper[31411]: I0224 03:16:15.513300 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-5xt4j_7211589b-d7b6-48c3-b3f2-d74d133733b0/manager/0.log"
Feb 24 03:16:15.571358 master-0 kubenswrapper[31411]: I0224 03:16:15.570594 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-lwlws_98db9e0e-7186-41ca-af3e-d192ec846273/manager/0.log"
Feb 24 03:16:15.671745 master-0 kubenswrapper[31411]: I0224 03:16:15.671676 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-nffrm_8cb468a5-7f35-4562-b24a-ee51dfb14055/manager/0.log"
Feb 24 03:16:15.685660 master-0 kubenswrapper[31411]: I0224 03:16:15.682621 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-74cdr_0ec9ca9d-8f74-4018-970c-370187583fae/manager/0.log"
Feb 24 03:16:15.704632 master-0 kubenswrapper[31411]: I0224 03:16:15.702984 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-579b7786b9tqsfz_e812dec6-4f25-4ba5-b08b-c2c7db77b4b3/manager/0.log"
Feb 24 03:16:15.796608 master-0 kubenswrapper[31411]: I0224 03:16:15.795747 31411 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-2492q_f85222bf-f51a-4232-8db1-1e6ee593617b/kube-apiserver-operator/1.log"